Peer review failures - another example?

I've discussed scientific peer review here before (one more reason I really should get those category tags working!), but RealClimate's discussion of Lindzen and Choi (2009) highlights a recent example of peer review standards in enough detail to show some of the problems and choices before us, in a case where the consequences actually matter.

I'm more interested than usual in this case because I actually read through the paper in some detail a few months ago and found a number of things about what the authors were doing that just didn't make sense to me. I commented to that effect on an earlier RealClimate post on the paper, a comment they highlighted in opening the new post on the peer review process. Basically, the authors of this climate paper claimed a rather extraordinary result: that they could determine the sensitivity of Earth's climate system from observational data on certain parameters in the tropics, and that the resulting number was much lower than the widespread consensus, as summarized from many different lines of evidence in, for example, the review by Knutti and Hegerl (2008).

The journal in question is Geophysical Research Letters, which apparently puts its emphasis on publishing things quickly rather than on making sure they are correct. From the RealClimate post it looks like the Lindzen-Choi paper received "extremely favorable" reviews, while a critical comment was rejected; a separate paper criticizing the original has since been accepted, as a standalone article rather than as a comment. Did the reviewers actually spend any significant time trying to read and understand the article? The problems should have been clear - the ones I noticed were perhaps the most obvious, but there were many others, pointed out by people from across the spectrum, including noted "skeptic" Roy Spencer.

Is the problem a fundamental issue with standards at this particular journal, GRL, or with geophysics in general? I'm pretty sure reviewers in physics, where I've worked both as author and as reviewer, are at least a little more dedicated to trying to find out whether an article actually makes sense. Certainly there are fields (mathematics, economics) where the review process is far more stringent, at least preventing things that are obviously wrong from getting into print (or into online journals these days).

Or is the real problem with blind peer review in the first place, and could this be fixed by a more open system? The purpose of journals really is to act as gate-keepers, imposing certain standards on the scientists who publish in them: if your science seems basically up to snuff, you get in; if not, you try somewhere else. By opening things up, you now have two sets of gates - one that is almost wide open (submitted articles) and a second where standards apply (accepted ones). Is that distinction clear enough for the purpose of journals still to be fulfilled? What happens to rejected articles in such a system - do they circulate indefinitely between the two sets of gates, never making it to "accepted" status but reproduced all over the place anyway? What sort of accountability is needed for comments and reviews in an open system - should they still be anonymous, but from people identified to the journal in some way? Or should only public comments from identified people be allowed? And how do you deal with malicious or irrelevant comments that may just waste the authors' and editors' time, if every comment has to be responded to?

With this specific example in mind, it seems clear that an open process would have been of benefit to all. The paper was already available online (in some form) before publication anyway, and had aroused considerable interest for its contrary-to-consensus conclusions; a number of people criticized the methods early on. A formalized, structured, open review in which the authors have to respond to comments would have quickly identified the serious problems with the paper, and should have forced the authors to do more work to justify their conclusions.

But perhaps this is an atypical example. More thought and discussion on the subject is still needed, I am sure.

Update - the following comment from Philip Machanick in that RealClimate thread seemed insightful in addressing some of my concerns about an open system (with the relevant experience already proven in the internet RFC context):

The way internet standards are developed is something the scientific community could learn from. The first stage is an Internet Draft (I-D), available for general review but not formally considered to be published. An I-D can migrate to a Request for Comments (RFC) with further review and editorial correction, before it becomes a standard. Unlike with academic papers, any interested party can comment on an I-D. An I-D is eventually withdrawn from the I-D site, but an RFC is an archival publication and is not altered once published.

This process ensures wide checking before a document becomes a standard. The downside is that embarrassing errors may see the light of day in an I-D. An advantage is that you can get your ideas out quickly, long before they have been formally reviewed, to discourage others from scooping you. And of course you are not relying on a selected and limited pool of reviewers.

Comments


Rejected papers go to arXiv.

Rejected papers go to arXiv. That was simple.
