Archive for: July, 2012

Systems Theory... well, Systems Theory Light.

Jul 17 2012

When I was in grad school, all of my course work was focused on dynamical systems. The simplest, canonical example of this is a mass on a spring. Hang a spring from a hook, hang a weight from the spring. Gravity exerts a downward force on the mass, and the spring exerts a restoring force, proportional to a spring constant, which acts opposite the direction of extension or compression. Thus, when you pull a spring, it pulls back, but when you compress a spring, it pushes back (as opposed to, say, a slinky, which will collapse all the way).

Force, you may recall, acts to accelerate masses. Acceleration is the second derivative of position. So, if we want to write an equation which describes where the mass is at any given point in time, we use Newtonian motion, which is good enough for 99.9999% of the engineering systems in the world. And while Leibniz came up with better notation than Newton did, neither is very amenable to typing things out in text, so I'll use primes to indicate derivatives.

So, Newton tells us that Force is equal to Mass times Acceleration. Also, the force exerted by a spring is equal to minus a constant k times the distance the spring is stretched. Or F=ma=-kx. Well, using ' to indicate derivatives, this means that F=m*x''(t)=-k*x(t). Or, the force exerted on the mass is equal to minus the spring constant times the excursion of the spring. When we solve this equation for x(t) (which takes a little bit of doing, but not too much, and which I won't go into here) we get the equation of the harmonic oscillator, or x(t)=A*cos((2*pi/T)*t+phi), where A is the amplitude, T is the period, and phi is the angle of offset, or phase shift. A and phi are functions of the initial conditions (i.e., how far was the spring extended in the first place, and did you just let go, or give it a push?), and T is a function of the spring constant and the mass.
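
If you'd like to see that solution in action without doing the calculus, here's a minimal Python sketch. The mass, spring constant, amplitude, and phase are numbers I made up for illustration; the code just evaluates x(t) and checks, by finite differences, that m*x'' really does equal -k*x:

import numpy as np

m, k = 2.0, 8.0                       # made-up mass (kg) and spring constant (N/m)
T = 2 * np.pi * np.sqrt(m / k)        # period, set by the mass and spring constant
A, phi = 0.5, 0.0                     # amplitude and phase, set by initial conditions

t = np.linspace(0, 2 * T, 10001)
x = A * np.cos((2 * np.pi / T) * t + phi)

# Approximate the second derivative with central finite differences:
dt = t[1] - t[0]
x_dd = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2

# Newton (m*x'') and Hooke (-k*x) should agree at every interior point:
assert np.allclose(m * x_dd, -k * x[1:-1], atol=1e-3)
print("m*x'' = -k*x checks out; period T =", round(T, 3), "seconds")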

Now, this all works for magic springs. Meaning, they don't lose any energy to friction or materials effects. In the real world, we need to include these, which we model as a damping function (like a shock absorber). For this, we add an additional force term, which is proportional not to the position, but to the velocity of the mass. Velocity is the first derivative of position. This looks like: m*x''(t) = -k*x(t) - c*x'(t), where c is a constant, called the damping constant, which indicates how much the system resists motion.

We rearrange this to x''(t)+(c/m)*x'(t)+(k/m)*x(t)=0, and solve. There are various ways of rearranging the constants in order to derive specific meanings from them, like the damping ratio or the angular frequency (because mathematically, a pendulum is really the same thing as a top; that is, oscillating is essentially the same thing as spinning modulo a factor of i ~ very roughly speaking).

Frequently, we want to drive these systems to behave in a certain way, and then we'll end up with a system that looks like: x''(t)+a*x'(t)+b*x(t)=u(t), where u(t) is some function that we'd like to try to force the system to follow. This type of system is often called a "canonical second order system", because it's well understood and is useful for modeling everything from masses on springs to radio communication. The math gets a little ugly if you don't do differential equations, and I'm not going to plough through it here. Suffice it to say that playing around with systems like this is how I got started in systems theory.
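
For the curious, here's a hedged sketch of how you might simulate such a system numerically. The coefficients are invented, and I've set u(t)=0, which gives back the free damped oscillator (swap in a step or a sinusoid to drive the system). Note the trick of stacking position and velocity into a state vector; that's a small preview of the matrix decomposition I describe below:

from scipy.integrate import solve_ivp

a, b = 0.4, 4.0                       # made-up damping (c/m) and stiffness (k/m) terms

def u(t):
    return 0.0                        # forcing function; zero means no driving input

def deriv(t, state):
    x, v = state                      # state vector: position and velocity
    return [v, u(t) - a * v - b * x]  # x' = v; v' = u(t) - a*v - b*x

sol = solve_ivp(deriv, t_span=(0, 20), y0=[1.0, 0.0], max_step=0.01)
print("final position:", round(sol.y[0][-1], 4))   # decays toward 0 when u(t) = 0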

Everything I've done above is basically at the sophomore-or-junior-in-college level of engineering, or at least it was eighteen years ago. As we go on in systems theory, we learn to handle higher-order systems (meaning derivatives greater than second), but to do so by decomposing the system into matrices and vectors, so that we replace higher derivatives with vector algebra. Turning differential equations into algebra is essentially how dynamical systems get solved: by using Laplace transforms, or in the discrete case, z-transforms.

I don't do any of that anymore, and I never did any professionally. The reason I write about it is to point out some of the basic concepts of systems theory that I do use today, and those basic concepts, which I'll try to go into more tomorrow, are the interconnectedness of things, and the utility of rates and relationships in modeling things. At no time, in explaining the above example, did it matter how big the mass was, or how strong the spring or the damping agent. Those are just numbers, and numbers are the least interesting thing about the problem.

The thing that matters is the relationships between the things. The equation of motion, F=ma, is a straightforward, linear equation which at first blush would seem to have very little to do with waves, and less to do with spinning. All it means is that if I push something, it'll go a little faster. But when we look more deeply into the arrangements of objects, and how they interact, we end up with behaviors which are very interesting: vibration, rotation. Which lead to music and electricity and other things which, from humble beginnings, reveal fascinating phenomena.

That's systems theory: the combination of small objects and simple rules to model and understand and predict very complex behaviors. Tomorrow I'll get into how we use this to model more exciting systems, like a medical clinic or something.


One response so far

Job Potential.

Jul 17 2012

One of the things about being on soft money and towards the end of a grant is that I don't know how long I'm going to have a job where I am. Currently I hold two posts: one in a hospital's research department, and one as an adjunct assistant professor in a medical school (which is not associated with my hospital). My funding in both places is about to run out. Obviously, I have grants in the pipeline, and more that I'm about to write. Specifically, I have an R01 equivalent to write, and I'm currently working on an R03 equivalent to generate preliminary data for it. I think I have a high hit potential on that if my papers currently under review are accepted. I'll be able to point to some legitimate success to build on.

But, it would be irresponsible of me not to be considering other options right now. The problem is that in order to get academic research positions, you need funding. To get funding, you need a position. So, I'm basically best where I am unless I have something fall out of the sky on me.

Not too long ago, something fell out of the sky on me. I got an email from a headhunter looking for an assistant professor doing health systems engineering. It's a tenure track position, pure fantasy job. The only problem is that it's far away, in a strange land where I know no one and do not know the culture. Nevertheless, I am intrigued. I work doing health care engineering, and I need to be honest with myself about my potential to do regular engineering. I'm not going to be competitive for a job as a professor of engineering. I'm not good enough to do the theoretical work. And I don't think I'd like it. But I'm a good health care engineer, and I love applied work, and the sort of softer theory that goes into model building.

And now this tenure track health care engineering position has fallen out of the sky at me. Which is really exciting. I have spoken to the headhunter a great deal, and was informed last night that my CV has been forwarded to the search committee, after the department head decided that I was indeed a qualified candidate. I'm really finding myself surprisingly attracted to the idea of this position. I'd be leaving everything I know behind, and launching into a strange new world, alone. Which is amazing and strange and exciting.

Of course it hasn't been offered yet. I don't know much about building my own research program. But I'd like to try. I have tended to succeed at things I dedicate myself to. Or at least, I've failed without falling apart. I'm beginning to think that I might like to give this a try.

3 responses so far

A Scientist Goes to His High School Reunion.

Jul 16 2012

I spent Saturday engaged in vigorous outreach for the scientific community. By which I mean, I didn't conceal my job when I went to my high school reunion. It was a strange but encouraging night. I didn't drink in high school, and I didn't go to my 10 year reunion. So no one there knows that I have this twelve year deep hole in my life where I drank so miserably. I was surprised that people had a lot of really good things to say to me and about me. There were a few people that I simply had no memory of, but for the most part I recognized and enjoyed meeting everyone again.

One person I went to high school with is a highly placed figure now in the non-peer-reviewed science publishing industry, and I told him I intended to follow up with him about possibly writing something for his publication. He seemed enthusiastic about that, but this is a dude who pals around with Neil deGrasse Tyson, so I'm pretty small potatoes by comparison. The really good writers, like Ed Yong, Emily Willingham, Carl Zimmer, etc., are too few and far between. Not that I think I could number with them, but I think it would be interesting to learn more about the science writing world, beyond the tiny corner where I blog about health care engineering.

But I was pleased to discover that the people in my high school graduating class seemed to have respect for science, seeing how they reacted to both the publisher and to me. There seemed to be genuine interest in how the world of science works, and what it's like to produce and publish science in the real world. It was exciting. It was nice to see that people who, back then, thought I was a "brainiac", "nerd", and "dweeb" now were impressed by my life of science.

Being a scientist is a good life. It's hard, the expectations that are placed on scientists these days, from living off grants to vanishing tenured positions to (as I can't remember who put it first) log-scale administrative bullshit. But the tangible rewards of science are real. We get to feel like we're doing good in the world, which will last beyond us. And that's exciting. I'm glad I'm a part of that.

One response so far

Newly Minted Epidemiologist.

Jul 13 2012

Now that I've taken my course in evidence based decision making, and passed the final with a perfect 10/10, I think it ought to be pretty clear to everyone that I'm essentially now a PhD Epidemiologist. If you've been following along here with the descriptions of the course, that's worth at least an MPH. So congratulations to us all. Go forth, as I will, and demand special privileges relating to seating arrangements in restaurants and at friends' weddings. If they give you any static, gently press your index finger to their lips and say, in a calming voice, "Shhhh. Shhhh. I'm a doctor." Trust me. They'll come to see it your way.

I am now carefully spreading my pestilence at a major international airport, with the goal of seeing whatever cold virus I've got circle the globe by the third of August. The Scientopia folks assure me the virus DNA is 'tagged, bagged, and luminescent when bombarded by semi-polarized ultraviolet light'. So we'll be able to track it.

Now I'm on my way to my high school reunion. That's right. The high-school nerd returns to the beast's lair, equipped with doctorate, title, bibliography, and federal funding history. I couldn't wait to get the hell out of there when I graduated. The local newspaper did a little spread of all 200 or so graduating seniors and asked each of us: "What do you think [my town] will be like ten years from now?" My response, which generated no small bit of townsfolk indignation, was, "Who cares?"

I'm not going to try to flaunt any credentials at people, of course. That's arrogant and stupid and certainly wouldn't impress anyway. But I doubt I'm alone in imagining my former tormentors awed by my success while they remain trapped in menial and manual labor positions. I'm not saying it's the most flattering portrait I'm painting of myself right now, but it's an honest one. Anyhow, this is all very confessional for this space, and thus probably belongs over at Infactorium and not here.

I'm going to go back, see people I've not seen for a couple of decades now, and hope it's not as horrible as living among them was. That's not too much to ask for, is it?

One response so far

Reading the Medical Literature: Economic Evaluations.

Jul 12 2012

Ok. I want to start off by saying that there must be something in the air ducts here at the Scientopia underground lair/resort facility/staging area for the domination of the surface folk. And it isn't just the Genomic Repairman's most recent crime against nature and nature's god*. I've come down with one of the many strains of biologic weaponry under development. I'm pretty sick and still on the downslope. I slept for like three and a half hours this afternoon, and I'm ready for bed again. So this is going to be a short post. I'm going to describe a few basic types of economic evaluation and then I'm going to take more Alka-Seltzer Cold and Flu, which, to my knowledge, remains the only viable antidote to Drugmonkey's new hallucination powder, which is in the water here. I'm telling you, if this RSS feed is reaching the outside, send help! And Purina lemur chow!

So. Now that the housekeeping is out of the way, here's some stuff about economic evaluations, quickly, before the delirium sets in. In order for a study to be considered an economic analysis, two things must be compared. And the comparison must be made in terms of both outcomes, and costs. So, for example, two pills, with different costs, and different likelihoods for preventing an expensive adverse event like a stroke. Then, we can compare the costs of the intervention, and the costs associated with the adverse event at the rates we'd expect to find in the population based on adherence to a particular therapeutic regimen.

There are several types of cost studies, all similarly named and with small, subtle, and important differences. For example, a cost minimization analysis, which compares the costs of two or more equally effective interventions. A cost effectiveness analysis, which compares the costs and health consequences of two or more programs while expressing the health consequences in a natural unit, like cost per event or cost per year of life. And a cost utility analysis, which is like cost effectiveness, but expresses the health consequence as a unit of quality-weighted time, like Quality Adjusted Life Years (QALYs).

A Quality Adjusted Life Year weights each year of life by a utility between 0 and 1, where a 1 represents one year of perfect health, and 0 represents being dead. Sometimes there are even negative utilities, like when you're a victim of locked-in syndrome and being cared for by the people in "One Flew Over the Cuckoo's Nest".

Figure 1: Cost/Outcome Grid

So, what we'd love, in doing these analyses, is to find a bunch of interventions that go in the green field: lower cost, better outcomes. And we'll never accept anything in the red field, that being higher cost, worse outcomes. The yellow field we might occasionally accept, when it's a worse outcome, but a lower cost. If an intervention which costs almost nothing can replace an intervention that costs a huge amount, and have only a tiny bit worse outcome, that would be acceptable in a lot of cases. But most of the time, we live in the blue field, where we have to decide if a new, more expensive intervention justifies its higher cost with substantially better outcomes. According to class, the slope of that line is generally accepted to be, in the USA, about $25K/QALY. This roughly corresponds to dialysis.
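
To make that blue-field decision concrete, here's a tiny sketch of the incremental cost-effectiveness calculation: extra dollars divided by extra QALYs, compared against the threshold from class. All the costs and QALY figures are invented for illustration:

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Invented example: a new drug costs $12,000 and yields 6.2 QALYs;
# usual care costs $4,000 and yields 5.8 QALYs.
ratio = icer(12000, 6.2, 4000, 5.8)
print(f"ICER = ${ratio:,.0f} per QALY")            # $20,000 per QALY

threshold = 25000                                  # the $25K/QALY line cited in class
print("adopt" if ratio <= threshold else "reject")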

Cost-benefit analyses are often judged on two kinds of calculation schemes: the human capital scheme, where we measure the benefit of healthy contribution to the workforce, versus the cost of the intervention; and willingness to pay, where we measure what people say they would pay to avoid an adverse event, such as a stroke. Both of these have basic problems. In the former, people who don't contribute to the workforce are considered valueless. In the latter, people in different circumstances will report vastly different values, and we can't be sure that if push came to shove, they'd actually do what they say, like pay $50K to not have a stroke; and besides, that doesn't work anyway.

The big takeaway from today's class is that economic analyses are essentially make-believe. There's no real way to measure cost to society. So, if you really want to get fiction published in the peer-reviewed literature, call it "economic analysis". I respect that that's probably unfair. I'm sick, and Fonzie is hungry.

___________

*Seriously. This is a man working on making spitting cobras airborne and contagious. Yes, when venomous snakes spread like viruses, person to person, it's pretty awesome, and it elevates bio-warfare to an elegant new art. I grant you all that. But have you considered that the antibodies to these organisms aren't fully reproducible outside of lab mice the size of rhinoceroses?

2 responses so far

Diagnostic Interventions.

Jul 11 2012

Today we talk a little bit about diagnostic interventions. That is to say, medical tests! What are they good for, what do they mean, and how do we interpret them? I'm going to restrict myself to the discussion of tests which have two outcomes: positive and negative. Many tests in fact have many outcomes, which may refine diagnoses into various categories. But for our case, we have this:

Figure 1: 2x2 for a diagnostic test.

In this case, the "Reference Standard" can mean a direct observation, through, for example, autopsy. This is how we measure if diagnostic tests work in the first place. We compare the test outcome which we believe to be positive or negative to a reference standard, which is perfect, but which we would like to replace with a test. After all, if a simple blood test can reliably diagnose a tumor, that saves a patient from having to be diagnosed with an invasive biopsy or with an autopsy later. I want you to mentally label the boxes a, b, c, and d, horizontally and then vertically. We'll use those designations later (so, false negative is 'c').

When determining if a test is reliable, we look at the same three basic concepts we looked at for the previous posts: Validity, Results, and Applicability. We test validity of a test by the blinded comparison against a known reference standard, and the reference standard must be used in all subjects of the comparison. We can't only apply the reference standard to, for example, the positive test results. And if the test were to involve multiple outcomes, it must be compared to all possible disease states.

Tests have a couple of basic properties: sensitivity, specificity, and the likelihood ratio. Much is often made of the so-called "positive predictive value". Forget about it. It's useless. The sensitivity of the test is how frequently a positive disease state yields a positive test. Note the temporal direction of action here. If we know you have the disease, you will be correctly classified with a positive test at the rate of sensitivity. The specificity of a test is the opposite of this. If we know you do not have the disease, you will be correctly classified with a negative test at the rate of specificity. Sensitivity is equal to a/(a+c). Specificity is equal to d/(b+d).

So what we'd like to know is, if I have a positive (negative) test, what's the probability that I (don't) have the disease? To do that, I need to calculate the likelihood ratio of the test. The first thing we need to know is that there are multiple likelihood ratios (LHRs) for each test. In fact, there's one for each outcome. An LHR > 1 means that that outcome increases my odds of having the disease. An LHR < 1 decreases those odds. So, in order to use LHRs, we need to be able to calculate odds from probabilities.

Luckily, this is dead simple. If you have a probability, you can calculate odds very easily. Odds = Probability/(1-Probability). Conversely, Probability = Odds/(Odds+1). Feel free to check the algebra yourself. I'll wait. Really. I'll just be here twiddling my thumbs and debating group heritability with Fonzie the winged lemur. Fine. I knew you weren't going to do it.
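
If you'd rather let a computer do the thumb-twiddling, here's a two-function sketch (mine, not from class) that does the round trip:

def odds(p):
    return p / (1 - p)

def prob(o):
    return o / (o + 1)

p = 0.20
assert abs(prob(odds(p)) - p) < 1e-12   # round trip: probability -> odds -> probability
print(odds(p))                          # 0.25, matching the appendicitis example below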

We use LHRs because the tests aren't perfect, and because individual patients have different pre-test probabilities from each other, and also from the population used to validate the test. So, for a 2x2 test like we are discussing here, the LHR+ = Sensitivity/(1-Specificity), and the LHR- = (1-Sensitivity)/Specificity. Again, an LHR > 1 increases the odds that the subject has the disease. It may be easier in this case to not think of tests as having positive and negative results, but simply "result A" and "result B". Because sometimes, the "negative" test result is the one which indicates presence of the disease.

So, to use an example from class, consider a white blood count test for appendicitis which has 76% sensitivity (that is, people with appendicitis will have a positive test 76% of the time), and 52% specificity (which is, people without appendicitis will have a negative test 52% of the time). Suppose I go in to the physician with a painful belly. Sawbones thinks I have about a 20% chance of having appendicitis based on my presentation. That's pretty unlikely, but worth refining with a test. So, let's do some arithmetic.

Let's figure out my a priori odds. So, 0.20/(1-0.2) = 0.25. Now, let's calculate the LHRs of this test. The LHR+ = 0.76/(1-0.52) = 1.58. The LHR- = (1-0.76)/0.52 = 0.46. So let's say my test was positive. If we're naive, we might think that a positive test which correctly classifies the disease state with 76% sensitivity means I should have a 76% chance of having appendicitis. But that's switching the order of things. 76% of people with appendicitis will have a positive test. It does not follow that 76% of people with a positive test have appendicitis. The test isn't specific enough for that. Many things which are not appendicitis also yield positive test results.

So, we take my a priori odds, 0.25, multiply by the LHR+ (I had a positive test), and get 0.395. These are my a posteriori odds. To find out how this has changed my likelihood of having the disease, we convert those odds back to a probability, and get 0.28, or 28%. So, this positive test, which has pretty good sensitivity, has only increased my probability of actually having appendicitis by 8 percentage points. Similarly, if you repeat this with a negative test, you find that my chances would go from 20% to about 10%.

However, suppose this test were better. Suppose instead of 76% sensitivity and 52% specificity, we have a test which is 85% sensitive and 90% specific. Now, what does a positive test mean, assuming I have the same 20% a priori probability? Well, my a priori odds are the same: 0.25. But my LHR+ = 0.85/(1-0.9) = 8.5. So a positive test means I multiply my odds by 8.5. Which means my a posteriori odds are 2.125. Which yields an a posteriori probability of 68%. With a test this good, and a 20% chance going in, Sawbones is prepping the OR. Similarly, LHR- = (1-0.85)/0.9 = 0.167. So if I had a negative test, my odds go to about 0.042, and my chances of having appendicitis are about 4%. Go on home, Capt. Stomach Ache.
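
To tie the whole example together, here's a hedged sketch of the pre-test-to-post-test pipeline. The sensitivities, specificities, and the 20% pre-test probability are the class's numbers; the function itself is just my illustration:

def posttest_prob(pretest_p, sens, spec, positive):
    """Update a pre-test probability using the likelihood ratio of the test result."""
    pre_odds = pretest_p / (1 - pretest_p)
    lhr = sens / (1 - spec) if positive else (1 - sens) / spec
    post_odds = pre_odds * lhr
    return post_odds / (post_odds + 1)

# Mediocre test (76% sensitive, 52% specific), 20% pre-test probability:
print(round(posttest_prob(0.20, 0.76, 0.52, positive=True), 2))    # ~0.28
print(round(posttest_prob(0.20, 0.76, 0.52, positive=False), 2))   # ~0.10

# Better test (85% sensitive, 90% specific):
print(round(posttest_prob(0.20, 0.85, 0.90, positive=True), 2))    # ~0.68
print(round(posttest_prob(0.20, 0.85, 0.90, positive=False), 2))   # ~0.04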

So, this is how we use diagnostic tests to inform medical decisions. Quiz your physician next time you're in the emergency room. Tell 'em you read it on a blog. Do it. I dare you*.

____________

*Dr24hours and Scientopia.org  assume no responsibility for adverse outcomes from this foolish, absurd, and inadvisable course of action. Don't piss off your doctor in the emergency room. Or elsewhere. At least, not with any expectation of relief from me. Or Scientopia. Or Fonzie.


2 responses so far

Perspectives on Perspective.

Jul 11 2012

Hi. I'm Dr24hours, and I'm dying. Not rapidly, or soon, I hope. But also not quite in the same way as we're all dying, that is, the way that we're all mortal. I have a terminal, incurable, progressive mental illness. I am an alcoholic. My disease is in remission now. But I'm not so arrogant as to say that I am certain to remain in remission forever. Not many alcoholics find recovery, though it's more today than it used to be. Of those of us who do, many relapse. I don't have the numbers in front of me, and I'm not sure I trust all the published reports in any case. But my experience is that most alcoholics die of alcoholism. Which is to say, we die of complications associated with the chronic or acute intake of extraordinarily large quantities of alcohol.

Liver failure, or liver cancer. Pancreatitis. Esophageal cancer. Acute alcohol toxicity. Motor vehicle accidents. Domestic accidents. Suicide. Stroke. Heart failure. Kidney failure. Alcoholism kills us, usually slowly, in floridly myriad manners. We often take others with us, either to the grave or by destroying the lives of those around us through brutal, chronic anti-social toxicity.

We alcoholics are characterized in general (though of course not universally) by selfishness, isolation either physical or psychic, defensiveness, rage, childishness, and fear. Getting from there to recovery, to being able to be a useful and productive member of society, is a long and difficult road. In my case, it involved rehab, counseling, and attendance at AA meetings. I continue to go to meetings. I did the twelve steps. I have a sponsor. I continue to go to meetings.

I'm not going to rehash my journey from alcoholic misery to my current condition here. If you want to read about it, please come over to my personal blog at Infactorium, where I write about it with some regularity. As I said in my first post here, I have found my path to recovery in AA. I know it's not the only path to recovery. I know it's not for everyone. I'm not here to speak for AA or for the recovery community. I couldn't if I wanted to.

But I did want to talk a bit about the variety of reactions I've received, online and in person, to the revelation of my sobriety. The online reaction has been almost universally positive. People are kind, interested, friendly. I wonder sometimes if the impersonality of the online world allows people to be friendlier in this case. No one online has to worry about relying on me, there's no personal risk if I relapse. So there's no danger, no reason not to be fully supportive and engaged. I've also been surprised by just how many people have had personal experience with alcoholism, either in their family, their work, or in themselves. I've had no small number of people reach out to me privately asking for advice or help. Sometimes I can help. Sometimes I can't.

In person, the reactions are more mixed. I've had women I've dated decide not to date me any longer when they find out I'm in recovery, because they don't want the anxiety of being involved with an alcoholic, someone who could theoretically relapse and cause untold pain and misery in a relationship. I can't blame them for that. It hurts me, but I respect it. There are things I can't accept in a potential partner too. We all get to choose our own boundaries.

A few of my co-workers know my condition. Including my boss, who is a psychologist. He's fascinated by it. Very supportive. It never seemed to occur to him that relapse could cause a problem at work. I can't even remember how it came up any more. My boss keeps forgetting that I'm in recovery and has occasionally asked if I want to get a beer after work, before remembering. It's comfortable and friendly. Another mentor was shocked that I went through graduate school as a drunk and still managed to graduate.

One of the meetings I used to go to a lot, but haven't been to recently, is an open meeting, meaning anyone from the community is welcome to attend. It's near the large school of social work near where I live, and so we regularly get students from the MSW program sitting in. It's part of their curriculum that they have to go to a couple of meetings. I have found them almost universally to be unbearably condescending. They say things like, "Good for you that you've managed all this!" when people talk about ordinary things.

For some of us, yes, just managing to get out of bed in the morning is an accomplishment. It's like that for all of us in the beginning of sobriety. But as time goes by, we return to society, to normalcy. We are simply men and women who have faced an addiction and come out on the other side. It's unremarkable. And it's amazing. Many ordinary things are amazing to me. But I don't deserve special credit for accomplishing ordinary tasks, like showing up to work. These young MSW students mean well. They're trying to be supportive. But it's grating. It comes off as a babysitter telling a precocious child that they're a good boy.

I'm fortunate. I found recovery prior to doing irreparable harm to my body. Before doing irreparable harm to my finances or employability. Before doing irreparable harm to my psyche.

Today, I have a great job. Well, I'm on soft money. But I love what I do. I am able to be of service to science and engineering. To health care, in a time when that's a matter of some critical national concern. I get to apply my skills to problems that will hopefully, eventually, help provide care for many people. Today, I run. I care about my own health, my own future. I am content, most of the time. Not always happy, but always grateful for where I am and what I have.

Today, I dedicate myself to my recovery. I've done the work on myself. I know why I drank, what I needed from alcohol. I know what I have to do to remain squarely centered. I keep my mind on my recovery and on the things I need to do to avoid the pitfalls of the mind and heart that might lead me back to alcohol. One of those things is helping others. People with addictions. People who love people with addictions. I welcome inquiries from people having trouble with alcohol, because helping those people helps me.

I live today in a place of gratitude. I am able to let go of things that trouble me when I don't benefit from hanging on to them. I am free to live day by day, looking to the future, informed by my past, but rooted in the present. I know how to flourish today, where before I merely persisted. I just survived. Now I thrive.

Hi. I'm Dr24hours. And I'm living.

5 responses so far

Reading the Medical Literature, Part II: Observational Studies.

Jul 10 2012

Yesterday's post covered the big shiny limousine of medical evidence, the Randomized Clinical Trial, or the RCT. That's what we'd all love to see when we're looking for evidence based medicine. When done well, it's the best evidence there is in the medical sciences. It's how we determine, prospectively, if there are associations between drugs or procedures and outcomes. It's not perfect, nothing is, but RCT outcomes, when clear, are very clear.

But sometimes, RCTs aren't possible. For example, we need to use different methods to study adverse outcomes than we do to study the effects of medical interventions. Because it's unethical to take a big group of people, randomly expose half of them to some kind of toxin, and then prospectively observe the consequences. Yes, that would be very strong evidence. No, the medical community doesn't do that kind of thing (anymore). Luckily, there are strong methods for determining associations with adverse outcomes that don't require us to murder patients.

The first of these, the best evidence in this situation, is called the "Cohort Study". Cohort studies are essentially the same as RCTs except for the random part. Patients are sorted into treatment arms according to anything except a random choice. Cohort studies are good choices when there are ethical issues, or when the outcome is very rare, or requires a prolonged time to be exhibited after an exposure (like asbestos and mesothelioma). Cohort studies are at risk for a couple of types of basic bias: selection bias, or the risk of including a non-random sample of patients in the study (like a bunch of dock-workers for a mesothelioma study); and detection bias, where we may look for the outcome in an inappropriately restricted sample (such as looking for diabetes only in obese patients).

Case-Control studies are, at least as explained in class, retrospective. These are also inverted from the Cohort Study and the RCT. They don't sort patients into exposure groups and look for outcomes. They instead sort patients into outcome groups, and then look backwards for an exposure which might explain their outcome. This means that these studies have to be very careful about how they build their case group, and their control group. There need to be very strict criteria for determining what is a case, and case reports are not allowed to be part of that sample (because exposure status is known). The case group and the control group need to be matched for attributes which are not a question of exposure. It is best to have the same number of cases and controls, but for rare diseases this isn't always possible. So, to have enough statistical power, we overmatch, with multiple controls for each case.

Case-Control Studies (CCS) also have important biases to overcome. The first of these is recall bias. Often, CCSs involve interviewing. Patients in the case group may try harder to remember exposures which could explain their condition. Interviewers in the case group may press these patients harder as well, since it is known that they had the outcome of interest ("Are you certain you never worked around asbestos, Mr. Dockworker?"). Finally, there is protopathic bias, which is when the timing of two events appears to be out of order. For example, suppose my stomach hurts and I start taking a bunch of antacids, but they don't help, and six months later I go to the physician and am diagnosed with stomach cancer. It might appear that the antacids caused the cancer.

It is also important to note that CCSs cannot generate incidence rates. Because the researchers select the cases and the controls, we cannot use this data to calculate the prevalence of the case in a larger population. We can only attempt to use the data to find an association between the known outcome and the exposure of interest. Frequency is not available to us.

We should also take this time to discuss confounders. Confounders are rare in RCTs because the randomization of the participants makes it unlikely that there's a strong preference in either the exposure group or the control group for any given factor. However, confounders may be common in studies where participants are not randomly assigned to study arms. A confounder is a variable which is positively (or negatively) associated with both the independent and the dependent variable. So, if we look at a study in which proper use of a medicine seems to reduce nursing home admissions among Alzheimer's patients, we need to consider the potential confounder of those participants who live with a caregiver. Because a caregiver is associated both with proper use of medication, and with being less likely to need nursing care. So the medication may appear to be reducing nursing home entrance rates, when in fact, it is the caregiver status which is fully responsible for the difference.

So, Cohort Studies and CCSs are both methods of observational research; one prospective, one retrospective. They're not quite as good as the RCT, because we have less assurance that the two study arms are statistically appropriate as experiment and control. We have to be more careful about potential biases, and we are at higher risk for confounding variables. So well designed studies which use these methods will be careful to examine and if necessary control for each of these potential difficulties.

In general, though, we ask the same questions about these studies as we did yesterday, about the RCT. Is the study valid? What are the results? And for whom are the results applicable? RCTs are excellent for examining interventions and potential new treatments. Cohort Studies and CCSs are the choice for studying etiology (especially temporally remote etiology) and adverse outcomes.

Finally today, an example from class about the necessity of control groups. The instructor, who is an internist from a rather prestigious Canadian medical school (I know, right?), was part of a study in which a group of patients with asthma was told that their medicine was being changed. They were then asked to describe the new medication's effects (better, worse, etc.). But in fact, the medicine was not changed. It was the exact same medicine in a different bottle. More than 75% of the study participants reported that the new medication was either better or worse than the old medication (with 50% saying it was better). I don't know how you get funding to study not changing a bunch of patients' asthma medication. But it's a really cool result.

So there we go. Observational studies. See you tomorrow with more from the salt mines. Now I'm going to go bask in the glorious pink light of Scientopia's private warm-fusion star on the coral sands island in the deep subterranean sea. It's glorious here. Fonzie is carrying on about optical physics with a raven who seems rather less than impressed...

2 responses so far

Introduction to Reading the Medical Literature: RCTs.

Jul 09 2012

So now I'm going to write about my first day of class, and discuss what we learned about reading the medical literature. Specifically, how to read and understand a paper about a Randomized Clinical Trial (RCT) (also called a "randomized controlled trial"). But first, I'm going to back up and talk a little bit about how we interpret any paper on a therapeutic intervention. We need to ask ourselves three basic questions. Is the study valid? What are the results? How is the study applicable beyond the study itself?

Validity

Validity is the first concern. If a study isn't valid, we don't really need to ask the next two questions. We can break this up into four issues: treatment allocation, blinding, follow-up, and analysis. Allocation is the method for determining who was tested; who were the subjects of the study, and how were they compared with the control group (the group which did not receive the intervention for purposes of comparison)? We discussed four basic types of studies: randomized clinical trials, cohort studies, case-control studies, and case reports. I'll probably come back to discuss each of those over the course of this week, but for now, let's just say that I've ordered them in descending order of quality of evidence. And case reports are basically not evidence of anything; they're useful only for hypothesis generation, not drawing any conclusions.

The RCT is at the top of the heap. At least, it is when it's done well. Essentially, the sample population is passed through a sieve to include or exclude patients. Then, patients are randomly sorted into two groups, the treatment group, and the control group. Then, the outcomes from the two interventions are considered (figure 1).

Figure 1: RCT

Randomization can balance the prognostic factors between the two groups, and prevents bias in deciding which patients belong in which group. We do not, for example, want all the very sick patients in one group or the other. We're not guaranteed that the two groups will be identical. But we are very, very likely to have them be very similar if the sample size is large.

Blinding is done in four ways, and not always done in every trial. Randomization is its own blind. The process of selecting which group each subject is in should be an unbiased, unknown process, so that there is no reason to believe that the control group and the experimental group differ in any important way. The second type of blinding is the subjects. For many trials, it is possible to blind the patients to which group they are in. This is done by giving a placebo pill instead of a pill with medicine in it to the control group, for example. The patient doesn't know if they're getting medicine or not. However, sometimes it is impossible to blind the subjects to which group they are in, especially if it is a complex procedure with a recovery time. Third, the providers can be blinded to the group. Again, this is not always possible. Surgeons generally know if they have performed surgery or not.

Finally, the evaluators can be blinded. If the providers or patients cannot be blinded, it is valuable to have a third party evaluate the outcome of the study, so that they do not know if the patient they're reviewing has had the tested intervention. If the evaluators are not blinded to study group, it is best to only evaluate hard outcomes, rather than subjective ones. For example, living or dead, rather than "severity of pain".

Follow-up must be observed as well. Many patients are lost to follow-up. We'd like to know if they are similar between groups, or not. We'd like to know why they were lost to follow-up. And finally, we'd like to know if those lost to follow-up could have significantly changed the results of the study. So, we'd like to see a calculation that assesses: if ALL of the patients lost to follow-up had the least desirable outcome, would this make me change my conclusions about an intervention? For example, if a heart drug seems to prevent heart attacks, but 5% of my intervention group was lost to follow-up, I'd like to see a calculation determining the results if all of those patients had heart attacks. This is unlikely, yes. But it allows us to bound the effectiveness of the intervention given the missing patients.
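
Here's a toy version of that bounding calculation; the cohort size, event count, and number lost to follow-up are all invented for illustration:

n_arm = 1000        # patients randomized to the intervention arm (invented)
events = 50         # heart attacks observed among those we followed
lost = 50           # patients lost to follow-up (5% of the arm)

followed = n_arm - lost
observed = events / followed                # rate among patients we actually followed
best_case = events / n_arm                  # assume none of the lost had heart attacks
worst_case = (events + lost) / n_arm        # assume ALL of the lost had heart attacks

print(f"observed rate: {observed:.1%}")                          # 5.3%
print(f"plausible range: {best_case:.1%} to {worst_case:.1%}")   # 5.0% to 10.0%
# If the conclusion survives the worst case, the missing patients can't overturn it.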

Finally, the analysis. What about patients in the intervention group who don't adhere to the study? Are they evaluated in the intervention group, or the control group? Neither? What if some patients cross over between groups during the course of the study? Do the study authors use different analyses for determining benefits vs. risks? And what about sub-group analyses? Measuring the effects of intervention on sub-groups should be decided a priori, and not post-hoc based on observing an event cluster in a particular sub-group.

So, that's a list of things we need to think about when determining if a study has validity.

Results

To determine the results of the study, we want to look at three basic aspects: statistical significance, precision, and power. Statistical significance is based on hypothesis testing using statistical methods, which I'm not going to discuss. But essentially, they tell us, when looking at these two groups of patients, what is the probability that the differences in their outcomes are due to chance, and not due to the intervention? The industry standard is 5%. If there is a less than 5% chance (we write: "p<0.05") that the result is due to chance, then we reject the "null hypothesis" (that the two groups are the same), and accept the active hypothesis that the difference in the two groups is due to the intervention we instituted.

It's worth noting at this point that not all RCTs are placebo controlled. Many are controlled with "usual care". We can't, for example, test a new blood sugar controlling drug against nothing at all. The patients in the control group would still be allowed to use insulin, or their other blood sugar medication. It's unethical to withhold treatment just because the patients are in a study. So the usual care is the control, and the active hypothesis is that the intervention is superior to usual care (often abbrev. UC).

We also use 95% confidence intervals for statistical significance. When computing an odds ratio, for example, which allows us to say that the likelihood of an event is x% greater given an intervention, we calculate the statistical bounds on that value. The strict definition is that 95% of the point estimates from sample populations will fall within the interval. But the practical interpretation (because we can do only one RCT, generally, and certainly not many of them) is that given the estimate we have from the RCT on the event rate, we can be 95% certain that the "real" value is within the CI. Then, if the confidence interval includes the value for "no increased likelihood" (so, if the odds ratio is 1.3, for a 30% increase, and the 95% CI includes 1.0, for 0% increase), we say that the result is not statistically significant. If the CI does not include this value, then we say it is a significant finding.
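
For the curious, here's a sketch of one standard way (the Woolf log-odds approximation) to put a 95% CI around an odds ratio from a 2x2 table. The counts are invented:

import math

# Invented 2x2 table: events and non-events in the treated and control groups.
a, b = 30, 170     # treated: events, non-events
c, d = 50, 150     # control: events, non-events

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
print("significant" if lo > 1 or hi < 1 else "not significant (CI includes 1.0)")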

Power was not discussed in class, and I freely admit to not understanding it terribly well. Essentially, well powered studies have enough patients in each group to trust the results. I'll add a clarification if we go into greater detail. I'm also sure someone like Mark CC could answer it in the comments if we bug him enough.

Applicability

Applicability was described as having four basic elements: intervention, patients, outcomes, and environmental factors. We'd like to know if the paper, the results of this study, are applicable in our own circumstances. Does this study, this intervention, apply to me? Or, if I were a physician, to my patients? We ask if the intervention is reproducible. Do we have similar products, similar exposure, and do we have similar skills required to implement the intervention? This may be especially important if the intervention is a surgery requiring equipment and training.

We ask if we, or our patients, would meet the study inclusion criteria. Suppose that blood sugar drug was not tested on diabetics with renal failure. Is there a reason? Is it safe to use on patients with renal failure? This will matter to me if I'm a nephrologist with a lot of diabetic patients. The whole population in the real world is different from the sample population, which is different from the study population. The only group we know for sure the study applies to is the study population.

We ask if the outcomes are clinically relevant. Did we use primary or surrogate outcomes? Primary outcomes for a diabetic might be vision loss, or renal failure. The surrogate outcome often used is hemoglobin A1c, which is a measure of average blood glucose content. We try to control the surrogate outcome because we know that it is highly associated with our primary outcomes. But this can fail. Because a drug which is beneficial to the surrogate could be negative to the primary through some other mechanism. We've seen this with drugs recalled from the market which may help control cholesterol, but actually increase cardiac mortality.

We ask if the benefits and risks were properly assessed. A drug which decreases risk of stroke might increase risk of fatal GI bleeds. We need to determine which of these effects dominates if we are going to recommend adopting the use of this drug. Maybe it decreases the risk of stroke by 10%, but increases the risk of GI bleed by 20%. Even then, that might be quite acceptable if the baseline risk of GI bleed is very low. We might be saving 10 strokes for every additional GI bleed.
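
As a toy version of that arithmetic (the baseline risks here are invented, just to show why relative risks alone can mislead):

baseline_stroke = 0.050    # 5% baseline stroke risk (invented)
baseline_bleed = 0.005     # 0.5% baseline GI bleed risk (invented)

strokes_prevented = 0.10 * baseline_stroke   # drug cuts stroke risk by 10% (relative)
bleeds_caused = 0.20 * baseline_bleed        # but raises bleed risk by 20% (relative)

print(f"per 1,000 patients: {strokes_prevented * 1000:.0f} strokes prevented, "
      f"{bleeds_caused * 1000:.0f} GI bleeds caused")
# 5 strokes prevented for every 1 bleed caused: a 20% relative increase on a tiny
# baseline is a small absolute harm.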

Finally, we look at the environmental and other factors: what are the patients' and providers' expectations? These will enhance or temper the placebo effect. Are there behaviors relating to adherence and persistence? An intervention which is highly effective at full dosage, but which dramatically loses effectiveness with a single missed pill might not be worth the expense.

Conclusion

So, I hope the above has given a little primer on how to read and interpret an RCT. I was going to include my own analysis of a paper now, but GASCKHK. I'm at 1720 words already, and I have more work to do tonight. So, here's your homework: go find a paper. Read it, and report back to me on your findings. Is it valid? What are the results? And how applicable are the results to you or your patient population? These are the questions, apparently, that physicians and epidemiologists ask when reading the medical lit.

Tomorrow? No idea! I'm having a fantabulous time. Oh look: here's Fonzie, my flying helper-lemur, with a cup of hot tea and a biscuit.

3 responses so far

Good Morning from Scientopia!

Jul 09 2012

Hi! My name is Dr24hours. You can, if you're interested, find me full time over at Infactorium, and on twitter at @Dr24hours. I'm super excited to be able to write and share here, in this big shiny space (You should see it here, it's amazing! The Scientopia Campus is alarmingly beautiful, all white marble, buttonless touch-pads, and genetically engineered helper-lemurs. Seriously. It's like "Oryx and Crake" without the calamity.). So I guess I'll give you a brief introduction to me, my work, and my life.

I am a systems engineer. I work in the field of health care delivery, where my fundamental interest is in studying and improving the complex systems which deliver care. And of course, the complex systems which receive care (people and populations). And how those two systems interact. I am currently working on building a simulation model to examine how clinical policy influences outcomes. And conversely, how demographic and other changes in the population influence demand for care in the clinical systems that serve that population.

I am also an alcoholic in recovery. I have discovered that this is an issue which has touched nearly everyone in every community I've ever participated in, to one degree or another. I write about my alcoholism because doing so is part of my recovery. However, it is important to note from the outset that I am not an addiction scientist, I am not a physician, and I do not have medical training. Please take all statements I make about alcohol, alcoholism, and recovery as opinion. Opinion only.

The means of my recovery is the program of Alcoholics Anonymous. I know that some people find that group to be controversial. I do not speak for them. I am not an authority on AA, its history, functioning, or organization (such as it is... it's a deliberately unorganized group.). I use that program and fellowship to stay sober. It works for me, as it has worked for millions of others. But there is no one way to recovery, and I am not here to debate the efficacy of AA vs. other methods or groups. I will write here about recovery and alcoholism, because it is a core aspect of my life. I'd also like to write at least one post about how we see alcoholics in our lives, and my experiences with people in science and academia upon their discovery of my condition. If I'm feeling really ambitious, I may do a post on interpreting some addiction literature from my perspective.

Which brings me to what I am going to spend my first week writing about. I am currently taking a short course in epidemiology, at a Midwestern school of public health. My class is on evidence-based decision making, and reading and understanding the medical literature. I received a small tuition and travel award to take a methodology course, and I decided to take this one because I work with physicians and epidemiologists, and I want to be able to understand their language and perspective better.

So I will be writing a few posts on interpreting the medical literature for non-physicians. Next week, I will likely be giving a primer on complex systems, and how they apply in the field of medicine. However, no battle plan survives contact with the enemy (that's you!), so I can't say for certain how anything is going to go here. I know that I'm thrilled and humbled by this opportunity. I'd like to thank Doc Becca, and Scicurious specifically for encouraging me with this opportunity. By which I mean, when I came begging for the opportunity to write here, the mocking was (eventually) tempered with mercy.

I try to live one day at a time. Today I am taking the first class of the first course I've taken in about 7 years. Today I am beginning an exciting guest-blogging gig on a large and respected blogging network. Today, for the 1,604th day in a row, I've decided not to drink any alcohol. I don't know what tomorrow may bring, but my world is looking pretty wide-open. Welcome to my time at the Guest Blog. I hope you'll like what I do with the place.

10 responses so far
