A paper recently appeared in the open-access peer-reviewed journal PeerJ titled Lung cancer incidence decreases with elevation: evidence for oxygen as an inhaled carcinogen. It makes the hard-to-believe claim that the oxygen we breathe is a cause of lung cancer. This is pretty far from my own expertise - can I trust this result? Reading the paper, I find it does the sort of statistical tests I would expect, checking for a long list of other possible factors, including radon and UV exposure (and of course cigarette smoking). Maybe they missed an important factor (cosmic-ray exposure? levels of some other carcinogen that varies with elevation?), but the analysis looks pretty reasonable to me.
I know nothing about the authors or their previous publication history. Their institutions are reasonably well-known (University of Pennsylvania, UC San Francisco). In the paper they declare "no funding for this work" which is a little suspicious - most good science is done with some sort of external funding. What about the journal? PeerJ is a new entrant (launched in February 2013), focused on Biological and Medical sciences, with a very different business model from traditional journals. But it claims to be applying rigorous peer review. I have some first-hand knowledge of that as a co-author (among many) on a paper being considered for publication in that journal - we had a pretty thorough first round of review by external referees.
What about external commentary on this oxygen-causes-cancer paper? There was fairly wide positive coverage, presumably following a university or journal press release, for example this report from EurekAlert!. The only negative response I could find was this one from Fiona Osgun at Cancer Research UK, which concludes "this paper is an interesting read certainly, but definitely doesn’t tell us that oxygen causes lung cancer" - her primary complaint seems to be that, because of the years for which the study's data were acquired, smoking was likely not fully taken into account.
Who to trust here? Unfortunately it's very unclear at the moment. To me this doesn't look like either the journal or authors were "behaving badly" - at worst they might have made some honest mistake that will be understood as this research is followed up on. At best - well, maybe we now understand another source of cancer risk. But maybe I'm way off base - like I said, this isn't a field I'm at all familiar with!
A second recent paper in a similar vein is Solar activity at birth predicted infant survival and women's fertility in historical Norway published in the quite-prestigious 350-year-old Proceedings of the Royal Society B (impact factor 5.683). This is again a statistical study of correlations, this time between solar activity (sunspot counts) at date of birth and various metrics of survival and fertility. Again the authors seem to have accounted for a variety of possibly confounding factors - the science looks reasonable to me (far from an expert in this field). The authors themselves are from the "Norwegian University of Science and Technology" in Trondheim which seems like a respectable place. The journal is clearly among the most well-regarded in the world. There's no sign of bad behavior by either author or journal here.
So what about reactions? The paper received pretty widespread media coverage, again essentially parroting a (positive) press release, with occasional quotes from other scientists expressing doubts about the mechanism. The only lengthy critical response I could find was this one by Richard Telford, and his critique of the UV mechanism appears quite devastating: Norwegians have very little UV exposure in the first place, and cloud and regional variability is much more significant than the solar-cycle effect. His suggested explanations for the paper (assuming the authors and journal did not behave badly):
The first is that some property of the data causes the statistical methods (which look good) to fail unexpectedly. The second is chance – one of the 5% of results that appear to be statistically significant at the p = 0.05 level when the null hypothesis is true.
There is a third possibility - that the solar cycle drives some mechanism other than UV variation, and that this mechanism affects life expectancy and fertility in Norway (and perhaps elsewhere).
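The second explanation - pure chance - is worth making concrete. Here is a minimal simulation (my own hypothetical illustration, not taken from either paper) of what "significant at the p = 0.05 level when the null hypothesis is true" means: we repeatedly test a perfectly fair coin for bias, and the test nonetheless declares an "effect" about 5% of the time.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def fair_coin_looks_biased(n=1000):
    """Flip a genuinely fair coin n times, then test H0: P(heads) = 0.5
    with a normal-approximation z-test. Returns True when the sample
    looks 'significant' at the 5% level (|z| > 1.96) - a false positive,
    since the null hypothesis is true by construction."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads - n * 0.5) / (n * 0.25) ** 0.5  # binomial mean n/2, variance n/4
    return abs(z) > 1.96

trials = 10000
false_positives = sum(fair_coin_looks_biased() for _ in range(trials))
print(false_positives / trials)  # roughly 0.05, by design of the 5% threshold
```

So even with sound data and sound methods, roughly one in twenty true-null studies will show an apparent effect - which is exactly why a single striking correlation, however carefully analysed, deserves skepticism until it is replicated.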
So do I trust this one? Given the strength of the one critique, right now I feel it is most likely the authors have made some sort of honest mistake. But again, I'm way outside my own expertise, and it is really hard to know what's right.
In both cases this state of uncertainty isn't bad - it's quite normal for science at the cutting edge to be ambiguous and uncertain. Scientific results that stand the test of time require confirmation by replication of studies like these under different conditions, testing the explanatory hypothesis via all the experimental and theoretical implications one can. When multiple lines of evidence all support the same picture of reality, then one can be fairly certain it is right. When we just have one line of evidence, in cases like this, it's fine and appropriate to be skeptical. Over time the truth should be sorted out.