Wednesday, December 21, 2011

Pre-publication Peer Review and Lazy Science Reporting

So it looks like, once again, a not-so-serious paper has made it through "peer review" (Fukushima: Alarmist Claim? Obscure Medical Journal? Proceed With Caution). Apparently, the lesson from all this is that you now have to pay attention to the ranking of the journal in which a paper appeared. Let us recall the Lancet study on MMR which, taken at face value, was responsible for hundreds of thousands of parents forgoing their kids' MMR vaccination, with disastrous consequences. Instead, I propose we avoid "science reporting" altogether, or at least avoid reading lazy journalists:

How do you figure these out? A good way of spotting these offending reporters is their need to balance the subject out with other studies, thereby framing the work as a battle between opposing sides:

...As it turns out, the authors, Joseph Mangano and Janette Sherman, published a version of this study in the political newsletter Counterpunch, where it was quickly criticized. The critics charged that the authors had cherry-picked federal data on infant deaths so they would spike around the time of the Fukushima disaster. Passions over nuclear safety further muddied the debate: both researchers and critics had activist baggage, with the researchers characterized as anti-nuke and the critics as pro-nuke.

Here is the thing that gets to me. It doesn't matter whether you debunk it; it's all part of a theater play where two sides are fighting in some epic battle. It doesn't matter if the data is cherry-picked (let's not even talk about the quite common misunderstanding that goes with statistics and its attendant p-values). A party that shows the study to be wrong simply gets labeled as a member of the opposing team. The investigating reporter cannot read the blog entry; it's too difficult, there are too many numbers at play. Let us continue reading that post:
As Scientific American's Michael Moyer writes: "The authors appeared to start from a conclusion—babies are dying because of Fukushima radiation—and work backwards, torturing the data to fit their claims."
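To make Moyer's point concrete, here is a toy sketch (synthetic Poisson counts, nothing to do with the actual CDC figures) of how scanning for the most dramatic before/after split can manufacture an impressive-looking p-value out of pure noise:

```python
# Toy illustration with synthetic data: weekly counts drawn from ONE constant
# Poisson rate, i.e., nothing ever happened. Scanning over many candidate
# "event weeks" and keeping the most dramatic before/after split still tends
# to produce a small p-value.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
weeks = rng.poisson(lam=50, size=52)   # one year of null data, the rate never changes

best_p, best_week = 1.0, None
for split in range(8, 44):             # try every plausible "disaster week"
    _, p = ttest_ind(weeks[:split], weeks[split:])
    if p < best_p:
        best_p, best_week = p, split

print(f"most dramatic split: week {best_week}, p = {best_p:.3f}")
# The reported p-value ignores the ~36 splits tried to find it, which is
# exactly the kind of cherry-picking the critics described.
```

Under the null, the smallest of three dozen p-values is expected to be small; that is the whole trick.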
So the only way for the reader to learn the truth is through the trustworthiness of another journalist. There is a nice herd mentality at play here: while this particular subject is a godsend for journalists because the issue has been well polarized over the years, what happens when one of the two sides does not exist?
So how did such a seemingly flawed study wind up in a peer-reviewed journal?

Part of the answer, at the end of the post, seems to be to check the journal's ranking. Wow. No, the only way to figure out whether some science is right is to check the scientific argument, not to trust pre-publication peer review or a journal's ranking. Pre-publication peer review seems to exist solely to give a compass to lazy journalists and science reporters who do not know what science is: another good reason to make it go away. Post-publication peer review is the way to go (and yes, I am stating that if the physicist and media darling Brian Cox really expressed the view that pre-publication peer review is the only way to do Science, then he does not understand Science).

Some good data about peer review can be found here.


