We compared three states that substantially expanded adult Medicaid eligibility since 2000 (New York, Maine, and Arizona) with neighboring states without expansions. The sample consisted of adults between the ages of 20 and 64 years who were observed 5 years before and after the expansions, from 1997 through 2007. The primary outcome was all-cause county-level mortality among 68,012 year- and county-specific observations in the Compressed Mortality File of the Centers for Disease Control and Prevention. Secondary outcomes were rates of insurance coverage, delayed care because of costs, and self-reported health among 169,124 persons in the Current Population Survey and 192,148 persons in the Behavioral Risk Factor Surveillance System.
This comes from the methods section of a recent article in the New England Journal of Medicine. The article was mentioned by the New York Times on July 26. I like the plots in Figure 1.
The Times provides some context on why it is seen as “controversial” whether Medicaid expansions improve health outcomes:
Medicaid expansions are controversial, not just because they cost states money, but also because some critics, primarily conservatives, contend the program does not improve the health of recipients and may even be associated with worse health. Attempts to research that issue have encountered the vexing problem of how to compare people who sign up for Medicaid with those who are eligible but remain uninsured. People who choose to enroll may be sicker, or they may be healthier and simply be more motivated to see doctors.
See also this earlier post.
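For the mechanics-minded: the design described in the methods excerpt is essentially a difference-in-differences comparison on a county-year panel. Here is a minimal sketch in Python, assuming a two-way fixed-effects specification with made-up file and variable names (an illustration of the general approach, not the authors’ actual model):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical county-year panel: one row per county per year,
    # with an all-cause mortality rate, an expansion-state indicator,
    # and a post-expansion-period indicator.
    df = pd.read_csv("county_mortality_panel.csv")  # assumed file
    df["treated_post"] = df["expansion_state"] * df["post_expansion"]

    # Two-way fixed effects: county and year dummies absorb level
    # differences, so the coefficient on treated_post is the
    # difference-in-differences estimate of the expansion effect.
    fit = smf.ols(
        "mortality_rate ~ treated_post + C(county) + C(year)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})
    print(fit.params["treated_post"])

Clustering the standard errors by state matters here because the policy varies at the state level; whether the neighboring states are a credible counterfactual is, of course, the whole ballgame.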
Andrew Gelman picks apart the study behind a front-page New York Times story. The story also got a link (and an uncritical one at that) on Freakonomics.
Gelman seems flabbergasted at what he sees as a hyperbolic claim resulting from a poor design filtered through poor journalism. I’m surprised at the surprise. As one commenter, “Mark,” points out at Gelman’s blog,
Andrew, this is my area of research (public health), and I don’t think you’re missing anything, and I’m not the least bit surprised that it resulted in big headlines in the NYT. This happens ALL THE TIME. Recall the recent results from Harvard regarding red meat and cancer: http://www.nytimes.com/2012/03/13/health/research/red-meat-linked-to-cancer-and-heart-disease.html?_r=1. OK, this one wasn’t exactly front page in NYT, but still they were hyping another similarly hopelessly flawed study. And, it’s not too hard to find many more examples in the NYT, some of which certainly made the front page. See John Ioannidis.
Another commenter, “Jonathan,” suggests, “Anyone know of any studies that look at impacts of new studies on behavior changes. It would also be interesting the difference before and after the introduction of the internet. If that makes sense.” I second that: can we get data on gym memberships and match it to NYT circulation??
Throughout the day, we’re bombarded with information about risk. Foods, medications, and lifestyle habits either reduce or increase our risk of a certain disease, say many studies.
That information can be valuable. But it is often misinterpreted by the media, consumers, and even doctors.
In the case of the chocolate study, the researchers never answered a key question: How likely is a person who doesn’t eat chocolate, or eats very little, to suffer from heart disease or stroke?
Without those numbers, consumers are in a situation that’s a little like having a 50 percent-off coupon without knowing the prices of anything in the store. You don’t want to use the coupon on a $1 item, saving only 50 cents, if you can use it on a $50 item and save $25 instead – but without those original prices, you’re flying blind.
The same is true for medical studies. In this story, we trace through a few hypothetical and real-life examples to show you how to better understand the concept of risk in medical studies aimed at consumers – starting with the chocolate study.
From this article in the Boston Globe. It mostly talks about interpreting “risk” studies, and it seems (from the correction notice and the hubbub in the comments) that the author actually messed up some key numbers; but overall it makes a good point about the hype of science journalism and gives readers some rules of thumb to see through it.
I have no idea whether this will be a regular feature, but let’s hope so, since the Globe is frequently guilty of the very sort of reporting criticized here.
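To make the column’s coupon analogy concrete, here is the arithmetic with made-up numbers (the 40 percent relative reduction and the baseline risks are hypothetical, not figures from the chocolate study):

    # A "40% relative risk reduction" (made-up number) means very
    # different things depending on the baseline risk, i.e., the
    # "store price" in the coupon analogy.
    relative_reduction = 0.40

    for baseline_risk in (0.02, 0.20):  # 2% vs. 20% baseline risk
        absolute_reduction = baseline_risk * relative_reduction
        nnt = 1 / absolute_reduction  # "number needed to treat"
        print(f"baseline {baseline_risk:.0%}: absolute reduction "
              f"{absolute_reduction:.1%}, about {nnt:.0f} people must "
              f"change behavior to prevent one event")

Same coupon, very different savings: that is exactly the store-price point the Globe is making.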
In the past decade there has been a flurry of research into the effects of antidepressants on pregnancy – in particular, on selective serotonin reuptake inhibitors (SSRIs), which include Prozac and are the most commonly prescribed such drugs. So far, findings published on their possible effects have been all over the map – from increased likelihood of pre-term delivery to poor adaptation by newborns because of withdrawal symptoms from the drugs.
From an article in the July 25th Boston Globe. The gist: basically no evidence, just hype, and lawyers trying to profit off of the hype. One thing left unclear for me is why anyone bothered to look at this question in the first place. Theory?
Peter Kramer’s opinion piece in the July 9 New York Times is both a critique of studies purporting to show that antidepressants “don’t work” and a critique of the media’s fascination with “debunking studies,” studies that contradict established views.
As for the news media’s uncritical embrace of debunking studies, my guess, based on regular contact with reporters, is that a number of forces are at work. Misdeeds — from hiding study results to paying off doctors — have made Big Pharma an inviting and, frankly, an appropriate target. (It’s a favorite of Dr. Angell’s.) Antidepressants have something like celebrity status; exposing them makes headlines.
Oh, and he also has a nice discussion of the conflicting research on the efficacy of antidepressants.
On a related note, I just came across a blog called “Retraction Watch.” Apparently they’ve gotten some media attention.
Researchers at the University of California, San Diego, use questionable data to make the flimsy argument that a few sips of alcohol are enough to significantly increase the likelihood of serious injury in an automobile accident (“Study: 1 drink raises driving risks,” g section, June 27).
Their data only reflect numbers for accidents in which there is a fatality. That’s only three-tenths of 1 percent of all car accidents – 34,172 out of 10.2 million. It’s like trying to discover how widespread steroid usage is and using Major League Baseball players as your sample.
I’m too tired to evaluate this. Any takers?
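The letter’s arithmetic checks out, at least, and the selection worry is easy to sketch (the alcohol-involvement shares below are invented purely for illustration):

    # Numbers from the letter: fatal accidents as a share of all accidents.
    fatal, total = 34_172, 10_200_000
    print(f"{fatal / total:.2%} of accidents are fatal")  # ~0.34%

    # The selection worry, with invented shares: if alcohol involvement
    # is more common in fatal crashes than in crashes generally, then
    # estimating anything from fatal crashes alone can badly overstate
    # what holds for accidents overall.
    p_alcohol_fatal = 0.30     # hypothetical share of fatal crashes
    p_alcohol_nonfatal = 0.05  # hypothetical share of non-fatal crashes
    overall = (fatal * p_alcohol_fatal
               + (total - fatal) * p_alcohol_nonfatal) / total
    print(f"overall alcohol involvement: {overall:.1%}")  # ~5.1%, not 30%

Whether that kind of distortion actually biases the UCSD estimate depends on details of their design, which, as noted, I haven’t evaluated.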
This New York Times editorial from yesterday morning is a great example of how not to write about a causal issue. The underlying causal claim is that graphic warnings on cigarette packages deter smoking. The single piece of supporting evidence, in five paragraphs, is that the World Health Organization did a study on this. No mention of anything more specific in terms of findings, let alone effect sizes (by how much does smoking decrease?) or research design. We are given the odd caveat at the end that “The evidence is not ironclad” since so many factors influence smoking rates. Sorry, what evidence was that?
Yes, one could read the WHO study and critique it. But what about people who read the print edition and don’t get the link? Even online, who has the time for that? And the broader point is that the editorial is basically useless if, to learn anything about this claim, we are better off just reading the original report.
Yes! screams this Boston Globe headline, “TV affects a child’s sleep patterns.”
Well, technically it’s probably true; I would wager there is at least one child in the world whose sleep patterns are affected by television, either negatively or positively. But for all kids? I’m not so sure.
Here’s the meat of the article:
Researchers from the Seattle Children’s Research Institute and the Department of Pediatrics at the University of Washington looked at sleep patterns and TV usage reported by parents for 612 children ages 3 to 5.
The 59 children who had a TV in their bedroom were more likely to report sleeping problems, including trouble falling asleep, nightmares, and trouble waking in the morning. Eight percent reportedly showed tiredness during the day compared with 1 percent of the rest of the group.
Putting aside the problems of selection bias (if your parent puts a TV in your room at age 3, maybe you had problems to begin with) and reverse causality (maybe kids with real trouble sleeping are given a TV so they will quiet down), I wonder what the purpose of this study is. Do we think that the types of parents who put a TV in their kids’ room will be swayed by these results?
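For what it’s worth, 8 percent of 59 children is about 5 kids. A quick back-of-the-envelope comparison, assuming “the rest of the group” means the other 553 children (the rounded counts are my reconstruction from the reported percentages, not the study’s raw data):

    from scipy.stats import fisher_exact

    # From the article: 612 children, 59 with a bedroom TV; 8% of that
    # group vs. 1% of the rest reportedly showed daytime tiredness.
    tv_tired = round(0.08 * 59)     # about 5 of 59
    rest_tired = round(0.01 * 553)  # about 6 of 553

    table = [[tv_tired, 59 - tv_tired],
             [rest_tired, 553 - rest_tired]]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio ~{odds_ratio:.1f}, p ~{p_value:.3f}")

The raw difference probably isn’t just noise, but statistical significance does nothing to rule out the selection and reverse-causality stories above.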
Last, since this is the Globe’s “Be Well” column, we get the following caveat:
CAUTIONS: The study relied on data reported by parents, so screen time may have been underreported.
That’s a problem, for sure; but the selection issue seems more serious, as does the super-confident headline.