Andrew Gelman picks apart the study behind a front-page New York Times story. The story also got a link (and an uncritical link at that) on Freakonomics.
Gelman seems flabbergasted at what he sees as a hyperbolic claim resulting from a poor design filtered through poor journalism. I’m surprised at the surprise. As one commenter, “Mark,” points out at Gelman’s blog,
Andrew, this is my area of research (public health), and I don’t think you’re missing anything, and I’m not the least bit surprised that it resulted in big headlines in the NYT. This happens ALL THE TIME. Recall the recent results from Harvard regarding red meat and cancer: http://www.nytimes.com/2012/03/13/health/research/red-meat-linked-to-cancer-and-heart-disease.html?_r=1. OK, this one wasn’t exactly front page in NYT, but still they were hyping another similarly hopelessly flawed study. And, it’s not too hard to find many more examples in the NYT, some of which certainly made the front page. See John Ioannidis.
Another commenter, “Jonathan,” suggests, “Anyone know of any studies that look at impacts of new studies on behavior changes. It would also be interesting the difference before and after the introduction of the internet. If that makes sense.” I second that–can we get data on gym memberships and match it to NYT circulation??
Throughout the day, we’re bombarded with information about risk. Foods, medications, and lifestyle habits either reduce or increase our risk of a certain disease, say many studies.
That information can be valuable. But it is often misinterpreted by the media, consumers, and even doctors.
In the case of the chocolate study, the researchers never answered a key question: How likely is a person who doesn’t eat chocolate, or eats very little, to suffer from heart disease or stroke?
Without those numbers, consumers are in a situation that’s a little like having a 50 percent-off coupon without knowing the prices of anything in the store. You don’t want to use the coupon on a $1 item, saving only 50 cents, if you can use it on a $50 item and save $25 instead – but without those original prices, you’re flying blind.
The same is true for medical studies. In this story, we trace through a few hypothetical and real-life examples to show you how to better understand the concept of risk in medical studies aimed at consumers – starting with the chocolate study.
From this article in the Boston Globe. It mostly talks about interpreting “risk” studies, and it seems (from the correction notice and the hubbub in the comments) that the author messed up some key numbers; but overall it makes a good point about the hype in science journalism and gives readers some rules of thumb for seeing through it.
I have no idea whether this will be a regular feature–let’s hope so, since the Globe is frequently guilty of the very sort of reporting criticized here.
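The coupon analogy is worth making concrete. Here’s a minimal sketch with invented baseline risks; the chocolate study reported no baseline at all, which is exactly the complaint, and the “37 percent lower risk” figure below is made up for illustration, not taken from the study.

```python
# Hypothetical numbers illustrating relative vs. absolute risk.
# None of these baselines come from the chocolate study; that study
# reported no baseline at all, which is exactly the problem.

def absolute_risk_reduction(baseline_risk, relative_risk):
    """Drop in absolute risk when an exposure multiplies the
    baseline risk by `relative_risk` (0.63 means "37% lower")."""
    return baseline_risk * (1 - relative_risk)

relative_risk = 0.63  # a made-up "37 percent lower risk" headline

for baseline in (0.001, 0.02, 0.20):  # rare, uncommon, common disease
    arr = absolute_risk_reduction(baseline, relative_risk)
    print(f"baseline {baseline:6.1%} -> absolute reduction {arr:.2%} "
          f"(about one averted case per {1 / arr:,.0f} people)")
```

Same coupon, very different savings: the identical relative reduction is worth a lot against a common disease and almost nothing against a rare one, and without the baseline you can’t tell which store you’re shopping in.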
I read something in the Boston Globe this morning that called to mind that cartoon again. Via Bloomberg News, the article reported the “results” of a “study” showing that working long hours increases the risk of heart disease. Here’s the procedure (maybe a red flag that it appears as the last paragraph of the story).
The research followed 7,095 civil service workers in London who were ages 39 to 62 at the start of the trial. They were screened for heart disease every five years. The study found that 192 people developed heart disease over 12.3 years of follow up. Those who worked 10 hours a day had a 45 percent higher risk of heart disease than those who worked seven to eight hours.
Um, where to begin? Selection bias, perhaps? A non-randomized “trial”? I’ve read stories in the fashion section of the New York Times that go into more detail about research flaws than this. (I seriously have, even before I started this blog.)
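And the story commits the very mistake the Globe’s own risk piece warns about: it reports only a relative risk. A rough back-of-the-envelope from the quoted numbers (a sketch only; it treats the overall event rate as the baseline-group risk and ignores group sizes, person-time, and whatever adjustments the researchers made):

```python
# Rough back-of-the-envelope from the figures quoted in the story.
# Treating the overall event rate as the baseline-group risk is
# crude, so take the output as illustrative only.

n_workers = 7095
n_cases = 192
relative_risk = 1.45  # "45 percent higher risk" for 10-hour days

overall_risk = n_cases / n_workers  # about 2.7% over 12.3 years

# If the 7-to-8-hour group's risk is p, the 10-hour group's risk is
# 1.45 * p. Use the overall rate as a stand-in for p:
baseline = overall_risk
elevated = relative_risk * baseline

print(f"baseline ~{baseline:.1%}, elevated ~{elevated:.1%}, "
      f"difference ~{elevated - baseline:.1%} over 12.3 years")
# -> baseline ~2.7%, elevated ~3.9%, difference ~1.2% over 12.3 years
```

So the headline “45 percent higher” shakes out to something like one extra case per hundred workers over a decade. Maybe that still matters, but a reader has no way to weigh it without the baseline.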
It’s funny what journalists can get away with, even on the “objective news” pages of the paper. You can’t outright lie or state your opinion, but you can get away with a crappy interpretation of a crappy study. Maybe it’s because there are no clear political implications, so there’s no worry about “bias”?
What also stands out in this article is that the study is presented as memoryless: there is no reference to any previous work on a question that has probably been studied hundreds of times. How does this new piece of information fit into the broader body of research? That’s nowhere to be found here.
A story like this deserves its own cartoon.