Yesterday's post covered the big shiny limousine of medical evidence, the Randomized Clinical Trial, or RCT. That's what we'd all love to see when we're looking for evidence-based medicine. When done well, it's the best evidence there is in the medical sciences. It's how we determine, prospectively, whether there are associations between drugs or procedures and outcomes. It's not perfect, nothing is, but RCT outcomes, when clear, are very clear.
But sometimes, RCTs aren't possible. For example, we need different methods to study adverse outcomes than we do to study the effects of medical interventions, because it's unethical to take a big group of people, randomly expose half of them to some kind of toxin, and then prospectively observe the consequences. Yes, that would be very strong evidence. No, the medical community doesn't do that kind of thing (anymore). Luckily, there are strong methods for determining associations with adverse outcomes that don't require us to murder patients.
The first of these, the best evidence in this situation, is called the "Cohort Study". Cohort studies are essentially the same as RCTs except for the random part. Patients are sorted into exposure groups according to anything except a random choice. Cohort studies are good choices when there are ethical issues, when the outcome is very rare, or when the outcome takes a long time to appear after an exposure (like asbestos and mesothelioma). Cohort studies are at risk for a couple of basic types of bias: selection bias, or the risk of including a non-random sample of patients in the study (like a bunch of dock-workers for a mesothelioma study); and detection bias, where we may look for the outcome in an inappropriately restricted sample (such as looking for diabetes only in obese patients).
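The arithmetic a cohort study ends in is simple enough to sketch: follow an exposed group and an unexposed group forward in time, then compare how often the outcome turns up in each. Here's a minimal Python sketch with invented, purely illustrative numbers:

```python
# Hypothetical cohort study (all numbers invented for illustration).
# We followed 1,000 exposed and 1,000 unexposed people forward
# and counted how many developed the outcome in each group.
exposed_cases, exposed_total = 40, 1000
unexposed_cases, unexposed_total = 10, 1000

risk_exposed = exposed_cases / exposed_total        # 40/1000 = 0.04
risk_unexposed = unexposed_cases / unexposed_total  # 10/1000 = 0.01

# Relative risk: how many times more likely the outcome is if exposed.
relative_risk = risk_exposed / risk_unexposed
print(f"Relative risk: {relative_risk:.1f}")  # prints "Relative risk: 4.0"
```

Because a cohort study observes each group from exposure forward, these per-group risks (and therefore the relative risk) are legitimate — something that, as we'll see, a case-control study cannot offer.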
Case-Control studies are, at least as explained in class, retrospective. They are also inverted from the Cohort Study and the RCT. They don't sort patients into exposure groups and look for outcomes. They instead sort patients into outcome groups, and then look backwards for an exposure which might explain their outcome. This means that these studies have to be very careful about how they build their case group and their control group. There need to be very strict criteria for determining what counts as a case, and case reports are not allowed to be part of that sample (because exposure status is already known). The case group and the control group need to be matched on attributes which are not a question of exposure. It is best to have the same number of cases and controls, but when the outcome is rare there may not be enough cases available. So, to preserve statistical power, we match multiple controls to each case.
Case-Control Studies (CCS) also have important biases to overcome. The first of these is recall bias. Often, CCSs involve interviewing. Patients in the case group may try harder to remember exposures which could explain their condition. Interviewers may also press case-group patients harder, since it is known that they had the outcome of interest ("Are you certain you never worked around asbestos, Mr. Dockworker?"). Finally, there is protopathic bias, which is when the timing of two events appears to be out of order. For example: my stomach hurts, I start taking a bunch of antacids, they don't help, and six months later I go to the physician and am diagnosed with stomach cancer. It might then appear that the antacids caused the cancer.
It is also important to note that CCSs cannot generate incidence rates. Because the researchers select the cases and the controls, we cannot use this data to calculate the frequency of the outcome in a larger population. We can only attempt to use the data to find an association between the known outcome and the exposure of interest.
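Since incidence is off the table, the association in a case-control study is conventionally expressed as an odds ratio: the odds of exposure among cases, divided by the odds of exposure among controls. A quick sketch, again with made-up numbers:

```python
# Hypothetical case-control 2x2 table (illustrative numbers only):
#                 exposed   unexposed
#   cases:           30        70
#   controls:        10        90
a, b = 30, 70   # exposed / unexposed among cases
c, d = 10, 90   # exposed / unexposed among controls

odds_cases = a / b       # odds of exposure among cases
odds_controls = c / d    # odds of exposure among controls

# Equivalent to the classic cross-product (a*d) / (b*c).
odds_ratio = odds_cases / odds_controls
print(f"Odds ratio: {odds_ratio:.2f}")  # prints "Odds ratio: 3.86"
```

Note that the row totals here are whatever the researchers chose them to be, which is exactly why no incidence or prevalence can be read off this table — only the cross-product survives that design choice.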
We should also take this time to discuss confounders. Confounders are rare in RCTs because the randomization of the participants makes it unlikely that there's a strong preference in either the exposure group or the control group for any given factor. However, confounders may be common in studies where participants are not randomly assigned to study arms. A confounder is a variable which is positively (or negatively) associated with both the independent and the dependent variable. So, if we look at a study in which proper use of a medicine seems to reduce nursing home admissions among Alzheimer's patients, we need to consider the potential confounder of those participants who live with a caregiver. A caregiver is associated both with proper use of medication and with being less likely to need nursing care. So the medication may appear to be reducing nursing home entrance rates, when in fact, it is the caregiver status which is fully responsible for the difference.
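The caregiver example can be made concrete with a toy simulation (all the probabilities below are invented). In this sketch the medication does nothing at all, yet it appears protective, because caregiver status drives both proper medication use and staying out of the nursing home:

```python
import random

random.seed(0)

# Toy confounding simulation (all probabilities invented).
# Caregiver status influences BOTH proper medication use AND
# nursing-home admission; the medication itself has no effect.
n = 10_000
counts = {True: [0, 0], False: [0, 0]}  # med_use -> [admissions, total]

for _ in range(n):
    caregiver = random.random() < 0.5
    # Living with a caregiver makes proper medication use far more likely.
    med_use = random.random() < (0.9 if caregiver else 0.2)
    # Admission depends ONLY on caregiver status, never on the medication.
    admitted = random.random() < (0.1 if caregiver else 0.4)
    counts[med_use][0] += admitted
    counts[med_use][1] += 1

for med_use in (True, False):
    admissions, total = counts[med_use]
    print(f"proper med use={med_use}: admission rate {admissions / total:.2f}")
```

Running this, the proper-use group shows a much lower admission rate even though, by construction, the drug does nothing — the apparent effect is entirely the caregiver confounder. Stratifying by caregiver status (comparing medication users and non-users within each stratum) would make the spurious effect vanish.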
So, Cohort Studies and CCSs are both methods of observational research; one prospective, one retrospective. They're not quite as good as the RCT, because we have less assurance that the two study arms are truly comparable. We have to be more careful about potential biases, and we are at higher risk for confounding variables. So well-designed studies using these methods will be careful to examine, and if necessary control for, each of these potential difficulties.
In general, though, we ask the same questions about these studies as we did yesterday, about the RCT. Is the study valid? What are the results? And for whom are the results applicable? RCTs are excellent for examining interventions and potential new treatments. Cohort Studies and CCSs are the choice for studying etiology (especially temporally remote etiology) and adverse outcomes.
Finally today, an example from class about the necessity of control groups. The instructor, who is an internist from a rather prestigious Canadian medical school (I know, right?), was part of a study in which a group of patients with asthma was told that their medicine was being changed. They were then asked to describe the new medication's effects (better, worse, etc.). But in fact, the medicine was not changed. It was the exact same medicine in a different bottle. More than 75% of the study participants reported that the new medication was either better or worse than the old medication (with 50% saying it was better). I don't know how you get funding to study not changing a bunch of patients' asthma medication. But it's a really cool result.
So there we go. Observational studies. See you tomorrow with more from the salt mines. Now I'm going to go bask in the glorious pink light of Scientopia's private warm-fusion star on the coral sands island in the deep subterranean sea. It's glorious here. Fonzie is carrying on about optical physics with a raven who seems rather less than impressed...