While there are many troubling consequences of the global warming we are imposing on ourselves, one of the most inexorable is the increase in ocean volume that leads to rising seas around the world. These changes are already bringing more frequent coastal flooding, loss of coastal land, and related trouble such as salt-water incursion into groundwater, and much worse is in store. Stefan Rahmstorf at RealClimate recently reviewed several new papers on sea level rise and, among other concerns, noted that even if we manage to limit warming to just 2 degrees Celsius, the expected eventual sea-level rise is 25 meters (82 feet) over thousands of years.

Part of this increase in volume comes from the melting of land-bound ice: mountain and polar glaciers and the major ice sheets of Greenland and Antarctica. The other part comes from the slow, steady heating of the ocean as a whole - warmer water occupies more volume than colder water. As long as our planet is subject to climate forcings that push it away from the pre-industrial energy balance, the ocean will continue warming even if the surface temperature starts to cool again once we have brought CO2 under control. While the surface of Earth has warmed by about 1 degree C since pre-industrial times, the temperature increase in the deep ocean has so far been measured at only one or two hundredths of a degree; eventually the deep ocean should catch up to the surface, so we can expect on the order of 100 times the thermal-expansion sea level rise seen to date from that cause alone - unless we can return surface temperatures to pre-industrial levels first.
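As a rough illustration of the thermal expansion effect, here's a back-of-envelope sketch. Both constants are round-number assumptions of my own: the real expansion coefficient varies with temperature, pressure, and salinity, and this ignores the ice-melt contribution entirely.

```python
# Back-of-envelope: sea level rise from thermal expansion alone,
# assuming the entire ocean column warms uniformly.
# Both constants below are rough illustrative assumptions.

ALPHA = 2.0e-4        # thermal expansion coefficient of seawater, per K
MEAN_DEPTH_M = 3700   # approximate mean ocean depth, meters

def thermal_expansion_rise_m(delta_t_k):
    """Rise (m) if the full water column warms by delta_t_k kelvin."""
    return ALPHA * MEAN_DEPTH_M * delta_t_k

# If the deep ocean eventually warms ~1 C to match the surface:
print(round(thermal_expansion_rise_m(1.0), 2))  # ~0.74 m from expansion alone
```

Even under these crude assumptions, a uniform 1 C warming of the whole ocean gives most of a meter of rise from expansion alone, before any ice melt is counted.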
Sea level rise (SLR) has accelerated over the past few centuries - see this post by Tamino on his analysis of satellite and tide-gauge measurements. Presently sea level is rising at close to 3 mm/year - about 1 foot per century. The IPCC has warned of up to 1 meter of sea level rise by 2100, implying a rate increasing to 10 mm/year or more. James Hansen in a recent paper argued for the possibility of several meters in 50 to 100 years due to a major acceleration in ice-sheet loss: this would imply a rate on the order of 30-50 mm/year of SLR. Even at 3 mm/year the inexorable rise is troubling; 10 times that would be devastating. Is there anything we can do about it? What does a look at some of the numbers in this problem tell us?
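For reference, the rates quoted above convert between metric and the "feet per century" framing like this (a trivial sketch, but it makes the Hansen scenario vivid):

```python
# Convert sea level rise rates from mm/year to feet/century.
MM_PER_FOOT = 304.8

def mm_per_year_to_ft_per_century(rate_mm_per_yr):
    return rate_mm_per_yr * 100 / MM_PER_FOOT

print(round(mm_per_year_to_ft_per_century(3), 2))   # current rate: ~0.98 ft/century
print(round(mm_per_year_to_ft_per_century(10), 1))  # IPCC-implied rate: ~3.3
print(round(mm_per_year_to_ft_per_century(40), 1))  # mid-Hansen scenario: ~13.1
```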
It's been almost 3 years since my last post about my 2005 Toyota Prius, so I thought another update wouldn't hurt. The car now has 144,000 miles on it and still runs fine as far as my daily use is concerned. Maintenance costs have not been high at all; I bring it to the dealer two to three times a year for oil changes and other routine maintenance. I haven't been quite as diligent about tracking gas receipts as I was in the middle years. In particular there was a big gap with lost receipts in the summer of 2013 (reason lost in the mists of time), and a smaller gap in November and December 2015 when my son was driving it a lot and didn't give me all the receipts for gas he bought. But there's enough coverage that I can still see pretty accurately how it is performing with regard to mileage. First the full-year mileage averages (taken by dividing miles driven in the year by the number of gallons of gas purchased):
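The calculation behind those averages is simple enough to sketch (the numbers below are made up for illustration, not from my actual logs):

```python
# Full-year mileage average: miles driven in the year divided by
# gallons purchased. Numbers here are hypothetical placeholders.

def annual_mpg(odometer_start, odometer_end, gallons_purchased):
    return (odometer_end - odometer_start) / gallons_purchased

# Hypothetical year: 12,000 miles on 260 gallons of gas
print(round(annual_mpg(120_000, 132_000, 260), 1))  # ~46.2 mpg
```

Note that a gap in receipts inflates the computed average, since the miles still count while the missing gallons don't - one reason the gap years have to be interpreted with care.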
The American Physical Society has been proceeding with a review of its statement on climate change. Judith Curry and Eli Rabett, among others, have posted the latest draft from the APS Panel on Public Affairs, which has been made available to the APS membership for commentary (each member is allowed one comment of limited length) before being given final edits and provided for a vote of approval to the council. I include below the draft statement and my comment.
I've been working on a post roughly on this topic for a while, and it's been getting long and meandering - so I've split it up. I'm expecting the following to be part one of several.
Scholarly communication as it is now is far from ideal. In what ways could it change to make things better? One way I have been thinking about it recently is in the context of Stewart Brand's "pace layers diagram"; Long Now Foundation just posted an interesting audio recording of a discussion about its origin and uses. The idea is that systems are composed of layers with differing rates of change:
The fast layers at the top are where innovation and experimentation happens, while the slow layers at the bottom bring stability. Changing things too quickly at the bottom isn't safe; forcing the top to slow down is equally harmful. Even our nature as human beings changes slowly over time, but for the most part that layer is given. Let's examine the other layers of the scholarly communication system a little:
A few days back one of my favorite writers, David Roberts at Grist posted an article titled We can solve climate change, but it won’t be cheap or easy. This was based on an article in WIREs Climate Change, A critical review of global decarbonization scenarios: what do they tell us about feasibility? by a group of authors including Jesse Jenkins, a young energy analyst I've also been following for a while. Andy Revkin at dot Earth picked up on Roberts' post as a positive sign, while Joe Romm at Climate Progress had a rather pointed critique.
I don't want to get into the details of the report or most of these arguments here; there was just one point that I thought was very odd when highlighted in Roberts' post, and exemplified by the following figure from the report:
Roberts quotes the authors as follows on this:
These studies also envision a normalized build-out of generating capacity in the range of 5-23 GW/year/$T of GDP, or 1.4-15 times faster than historical experience. These unprecedented rates are a consequence of both the relatively low-capacity factors of wind and solar as well as increased demand due to the assumed widespread electrification of the economy.
What struck me was this measure of "build-out of generating capacity" in GW/year/$T of GDP. The graph suggests past experience on this measure should be a guide to the future, and therefore the Jacobson, WWF, etc. scenarios are unlikely to be feasible. But why did the authors pick this measure?
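To make the units of that metric concrete, here's a sketch of the arithmetic. The GDP and build rates below are hypothetical round numbers of my own, not figures from the review:

```python
# The review's normalized build-out metric: capacity additions per year
# divided by economic output, in GW/year per trillion dollars of GDP.
# Input numbers are hypothetical illustrations.

def normalized_buildout(gw_added_per_year, gdp_trillion_usd):
    return gw_added_per_year / gdp_trillion_usd

# Hypothetical global scenario: 400 GW/yr of new capacity, ~$80T world GDP
print(normalized_buildout(400, 80))  # 5.0 GW/yr/$T - the low end of the 5-23 range
```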
My last few posts have been on some examples of dubious scientific publications. Some publishers are bad actors. Some authors are naively over-confident and have found naive editors or publishers to match. Sometimes it can be hard to tell. The last case I'm going to look at here is perhaps the worst situation - where the authors are clearly behaving badly, and somehow made it through some form of peer review. This is the sort of thing that gets reported regularly on Retraction Watch, and also similar to the Gerlich and Tscheuschner case except that the journal in question is slightly more prestigious (G&T's journal, IJMP-B, has an impact factor of less than 0.5). And once again the topic is climate change.
The paper this time is Why models run hot: results from an irreducibly simple climate model by Christopher Monckton, Willie W.-H. Soon, David R. Legates, and William M. Briggs, published in Science Bulletin by Science China Press and Springer-Verlag. Given that Springer is about to merge with Nature, the breakdown of reasonable peer review in this case indirectly reflects badly on one of the most prestigious journal brands in all of science (Springer is of course also highly regarded).
Monckton and friends' paper has been widely criticized already by ...and Then There's Physics, by Jan Perlwitz in two articles, and by Roz Pidcock at the Carbon Brief, who quotes various other scientists on the topic. Since the essential argument is barely changed from Monckton's 2008 Physics & Society (P&S) article that I found full of errors, I thought it deserved a bit of post-publication attention from me as well. It really is astonishing that this work was approved by an editor for what looks like a reasonable scientific journal.
At first sight this article isn't as obviously nutty as some of those I've discussed here previously - the graphics and tables seem to be well designed, and the reference section looks fairly substantive. The mathematics is once again pure algebra, with not a sign of an understanding of the calculus invented by Newton and Leibniz a few hundred years ago - we'll come back to that. But other than the overly simplistic math, the paper may not strike even an experienced editor immediately as absurd.
In my last couple of posts I've been sharing some observations on the state of science communication these days, particularly problems with predatory publishers, overzealous publicity for ambiguous research, and the difficulty for an outsider in understanding what to trust among the things that get published. Bottom line: it's become much easier to communicate, but harder to know whether what's being communicated is worth paying attention to.
In this post I present two more cases of dubious publication - as these are in physics I'm rather more certain they are wrong. These have received a rather mixed collection of "post publication peer review" but in a fashion that I believe would leave the non-expert quite unaware they are not useful contributions to science.
Our first case study here is from the mega-journal PLOS ONE - "Implications of an absolute simultaneity theory for cosmology and universe acceleration" by Edward T. Kipreos, of the University of Georgia. PLOS ONE is an open access journal started in 2006 (2012 impact factor: 3.73); it is the world's largest journal by number of papers published, publishing original research across many fields. Published articles are reviewed for technical validity but as long as they pass that, it doesn't matter how important the reviewers or editors feel the article is. PLOS ONE doesn't publish many physics articles on the whole, but it has done a few along the lines of the above, for example this article that seeks to rework general relativity - probably equally dubious but one I have less background to assess.
What are we to make of Dr. Kipreos' article? The University of Georgia seemed rather pleased with the publication. Its press release also notes he is a molecular geneticist, not a physicist. Hmmm. Some version of the press release (with no comment from any actual physicists) spread quickly around the internet; the article lists 8522 "views" on the PLOS platform right now. Despite the promise of "post publication peer review", there is only one incomprehensible comment on PLOS ONE itself, and nothing in PubMed Commons or PubPeer.
I did track down two apparently knowledgeable critiques in blog form from physicists Brian Koberlein and Matthew R. Francis. There is no sign of any response or acknowledgement from Dr. Kipreos of the problems they raise.
As Dr. Francis notes, articles "proving" Einstein was wrong are extremely popular among the less-well-informed. I recall when I was about 12, after having read some account of special relativity (probably by Asimov), something occurred to me that I thought I saw clearly - surely everybody else must have overlooked it! I'd be famous, get a Nobel, etc. etc. When I tried explaining it to my Dad (a chemist) he rather patiently suggested maybe I needed to study the subject a bit more. Sure enough, when I understood it better, I found my insight had long since been accounted for.
Sadly, there are a few people who never seem to grow out of that stage of certainty that they've discovered something simple that others have missed. Physics journals routinely receive these "crackpot" papers - I've heard something like one in five papers submitted to journals that cover gravitation and relativistic physics are in that category. Their editors can spot these papers a mile away.
A paper recently appeared in the open access peer-reviewed journal PeerJ titled Lung cancer incidence decreases with elevation: evidence for oxygen as an inhaled carcinogen. It makes the hard-to-believe claim that the oxygen we breathe is a cause of lung cancer. This is pretty far from my own expertise. Can I trust this result? Reading the paper I find it seems to have done the sort of statistical tests I would expect, checking for a long list of other possible factors including such things as radon and UV exposure (as well as of course cigarette smoking). Maybe they missed an important factor (cosmic ray exposure? levels of some other carcinogens that vary with elevation?) but the analysis looks pretty reasonable to me.
I know nothing about the authors or their previous publication history. Their institutions are reasonably well-known (University of Pennsylvania, UC San Francisco). In the paper they declare "no funding for this work" which is a little suspicious - most good science is done with some sort of external funding. What about the journal? PeerJ is a new entrant (launched in February 2013), focused on Biological and Medical sciences, with a very different business model from traditional journals. But it claims to be applying rigorous peer review. I have some first-hand knowledge of that as a co-author (among many) on a paper being considered for publication in that journal - we had a pretty thorough first round of review by external referees.
What about external commentary on this oxygen-causes-cancer paper? There was quite wide positive coverage, presumably following a university or journal press release - for example this report from EurekAlert!. The only negative response I could find was this one from Fiona Osgun at Cancer Research UK, which concludes "this paper is an interesting read certainly, but definitely doesn’t tell us that oxygen causes lung cancer" - her primary complaint seems to be that they likely didn't take smoking fully into account, due to issues with the years for which the study's data was acquired.
Who to trust here? Unfortunately it's very unclear at the moment. To me this doesn't look like either the journal or authors were "behaving badly" - at worst they might have made some honest mistake that will be understood as this research is followed up on. At best - well, maybe we now understand another source of cancer risk. But maybe I'm way off base - like I said, this isn't a field I'm at all familiar with!
A second recent paper in a similar vein is Solar activity at birth predicted infant survival and women's fertility in historical Norway published in the quite-prestigious 350-year-old Proceedings of the Royal Society B (impact factor 5.683). This is again a statistical study of correlations, this time between solar activity (sunspot counts) at date of birth and various metrics of survival and fertility. Again the authors seem to have accounted for a variety of possibly confounding factors - the science looks reasonable to me (far from an expert in this field). The authors themselves are from the "Norwegian University of Science and Technology" in Trondheim which seems like a respectable place. The journal is clearly among the most well-regarded in the world. There's no sign of bad behavior by either author or journal here.
So what about reactions? The paper received pretty widespread media coverage, again essentially parroting a (positive) press release, with occasional quotes from other scientists expressing doubts about the mechanism. The only lengthy critical response I could find was this one by Richard Telford, and his critique of the proposed UV mechanism appears quite devastating: Norwegians have very little UV exposure in the first place, and cloud and regional variability is much more significant than the solar cycle effect. His suggested explanations for the paper's results (assuming the authors and journal did not behave badly):
The first is that some property of the data causes the statistical methods (which look good) to fail unexpectedly. The second is chance – one of the 5% of results that appear to be statistically significant at the p = 0.05 level when the null hypothesis is true.
There is a third possibility - that some other mechanism than UV variation is mediated by the solar cycle and has an impact on life expectancy etc in Norway (and perhaps elsewhere).
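The "chance" explanation is easy to demonstrate: run enough pure-noise studies and about 5% of them will clear the p = 0.05 bar. Here's a quick simulation on entirely synthetic data, using a crude two-sample z-test (nothing here comes from the paper itself):

```python
import random

random.seed(42)

def fraction_significant(n_studies=5000, n=30):
    """Compare two pure-noise groups many times; count 'significant' results."""
    hits = 0
    for _ in range(n_studies):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        diff = sum(a) / n - sum(b) / n
        se = (2 / n) ** 0.5              # std error of the difference in means
        if abs(diff) / se > 1.96:        # |z| > 1.96 corresponds to p < 0.05
            hits += 1
    return hits / n_studies

print(fraction_significant())  # close to 0.05 even though nothing is real
```

With thousands of studies published every week, that 5% adds up to a steady stream of spurious "findings" - no misconduct required.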
So do I trust this one? Given the strength of the one critique, right now I feel most likely the authors have made some sort of honest mistake. But again I'm way outside my own expertise, it is really hard to know what's right.
In both cases this state of uncertainty isn't bad - it's quite normal for science at the cutting edge to be ambiguous and uncertain. Scientific results that stand the test of time require confirmation by replication of studies like these under different conditions, testing the explanatory hypothesis via all the experimental and theoretical implications one can. When multiple lines of evidence all support the same picture of reality, then one can be fairly certain it is right. When we just have one line of evidence, in cases like this, it's fine and appropriate to be skeptical. Over time the truth should be sorted out.
I thought I ought to share a recent incoming email with the world... I know this is a widespread problem, but the coincidence of this appearing just as I was thinking about Gerlich, Tscheuschner and Monckton made posting this one a little irresistible :)
Invitation to Propose a Special Issue and be the Lead Guest Editor
SciencePG (http://www.sciencepublishinggroup.com) is one of the worldwide publishers who is dedicated to promoting exchange of knowledge and advancing technological innovation. Special Issue is a part of SciencePG and plays an important role of its rapid development. Acquiring that you have once published a paper titled COMMENT ON "FALSIFICATION OF THE ATMOSPHERIC CO2 GREENHOUSE EFFECTS WITHIN THE FRAME OF PHYSICS" on the theme of Greenhouse effect; climate; thermodynamics in INTERNATIONAL JOURNAL OF MODERN PHYSICS B, SciencePG believes you must have great achievements in your research field and sincerely invites you to propose a Special Issue and be its Lead Guest Editor.
No, I'm not talking about how last year was the hottest (globally) on record. It's been almost two years since my last post here, but I'm intending to end the silence. I have actually been doing a lot of reading and listening of one form or another, along with a few comments here and there, especially on Twitter. One reason I haven't felt an urgent need to write has been a couple of excellent new entrants in the climate blogosphere:
It's been another two and a half years since I posted about my hybrid car - how has it been doing now that it's getting past 100,000 miles and the original warranty period is coming to an end real soon (8 years on the electrical system)? Here's the same graph extended to another 30,000 miles of driving:
I've noted before:
Loops are at the heart of nonlinearity and the complexity that makes the world interesting, but loops are also dangerous and eagerly avoided by the analyst. Circular reasoning is rightly condemned within pure logic and feedback loops are the bane of every sound system, but loops of self-consistency seem naturally at the heart of how we make sense of the world, and what true understanding is. I think one of Hofstadter's points [in "I Am a Strange Loop"] is that it is only through level-crossing loops that you can break out of simple tautology into true understanding. Perhaps this is at the heart of the "inductive" reasoning that Popper persistently attacks in his "Logic of Scientific Discovery" [...] Economic loops are central to growth; the familiar chicken-and-egg loop is at the heart of life itself.
There's been a bunch of discussion lately about comments on blogs that has made me even more convinced they aren't worth it here - so I've turned off all commenting by anonymous users.
When Physics and Society, the publication of the APS Forum on Physics and Society, published an article by one Wallace M. Manheimer suggesting, with little evidence, that renewable energy was useless, I felt obligated to respond with some more complete information. They published my letter in the latest issue; I've reprinted it below.
Wallace M. Manheimer's article on energy choices in the April 2012 issue makes a number of important points, but also goes wrong on many fronts, and I hope Physics & Society will allow at least some correction of these misstatements.
To start at the end, Manheimer asserts that "one cannot talk about climate and ignore energy supply. Yet, these organizations [AIP and APS] have done just that." One need only read the same issue of Physics and Society to know that claim is false - the book review by Paul P. Craig mentions "the first APS energy study [...] in 1973", which has been followed by many others. Manheimer himself cites the recent APS "Energy Efficiency Report" - and then appears to dismiss it as parochial. This is ironic since he earlier claims that cutting US energy use would be "worse because distances are much greater in the United States, it is colder here, and we have responsibilities as a major world power." That comparison of Manheimer's pertains to Italy, but in general technology developments allowing efficiency gains in the US apply equally well or better elsewhere.
I had an odd call at work the other day. My cell phone rang and I noticed an unfamiliar number with a Rochester, NY area code.
Rochester: "Hello, is this the bishop?"
Me: "Uh, yes"
I waved the colleague I'd been talking with out of my office and closed the door as you never know when a call like this will involve confidential matters.
Rochester: "I'm really sorry to disturb you but they gave me your number and told me to call."
Me: "Oh, no problem, what's this about?"
It's mid-February 2012 and the various groups reporting global surface temperature data have all posted numbers that are on the low side compared to the 21st century thus far, though still several tenths of a degree above the 20th century average. NASA's Goddard Institute for Space Studies (GISS) in particular, whose index I've been following and posting guesses/predictions about for several years now, has just posted a January 2012 global land + sea temperature anomaly of 0.36 degrees C - 0.15 degrees below the 2011 annual average of 0.51 and 0.27 degrees below the 2010 (and 2005) record of 0.63.
So naturally, those who deny the scientific evidence for human-caused warming have been going on about cooling, how warming has stopped for many years, and so forth - though not as adamantly as they were back in 2008; that cooling proved very brief, so perhaps they have learned to be more cautious. If I were of a mind to go with simple linear trends and didn't believe the science on warming myself, the January 2012 number of 0.36 would sound like a pretty good bet for the whole of 2012 at this point - maybe I'd even guess lower.
But I understand the science and know quite clearly warming will continue (with some minor ups and downs) until we stop burning fossil carbon - and even after that, the warming may take a while to actually stop. I've plugged some numbers in and come up with a science-based prediction for 2012: 0.65 C (plus or minus about 0.07). That middle value would break all previous records. Even the low end would put 2012 at the 5th or 6th warmest year on record (just slightly cooler than the remarkable early warm year of 1998 in the GISS index). Why am I so sure 2012 will be so warm?
The following is my review of The Hockey Stick and the Climate Wars, by Michael Mann - I read the Kindle edition (and sent him a few corrections for typos here and there).
As I was reading this first-person account of some of the most maddening episodes in modern times, I wondered to myself - what audience is this written for? How will some of the different players and bystanders react? Is Dr. Mann bringing on himself here yet another round of baseless attack from those who side with the most powerful entities human civilization has ever known?
I have no doubt the attacks will continue to intensify. If you hear about this book from some of the people, foundations and corporations that Mann names in it, please remember they have a very strong agenda: they don't want you to read it. If you find yourself sympathizing with one of these powerful entities, that means you need to read it, more than anybody else.
Having recently acquired (through work) an iPad, I've downloaded a lot of free or inexpensive e-books, and made some attempt at reading a few of them on the system. It's not quite like a physical book, but it's not a bad experience. There have been some formatting or editing issues (presumably because the originals were scanned and turned into text via OCR) - but sometimes you see copy-editing errors in a physical book as well. It means it's slightly less ideal than the "real thing", but I don't think I've run into anything that prevents communication of the author's intent.
Among the books are many that I've long had some desire to read, but never got around to looking up in a store or library - it's easier (or perhaps just implies less commitment) to download them than to go out searching. One of these was the original 1936 book by economist John Maynard Keynes, "The General Theory of Employment, Interest and Money", which led to the so-called Keynesian philosophy in US monetary and economic policy in the mid 20th century. Keynes wrote in the context of the Great Depression, which the world was experiencing at that time, and while much of what he says can be hard to understand, it surely has some relevance to the modern "Great Recession" the world has been going through for the last few years. Below I'll quote some parts of the text I found particularly enlightening, without a lot of comment from me on the matter, since to say anything knowledgeable on this would require more familiarity with other economic schools of thought than I possess.
2010 and 2011 may not have been unusual in terms of the number of energy-related disasters, but I suspect they have at least been unusual in the quantity of headline-grabbing material and TV news attention, and the ongoing disaster stemming from the earthquake and tsunami in Japan is only the most dramatic of them. With the one-year anniversaries of the Upper Big Branch and Deepwater Horizon disasters just past, along with the 25th anniversary of Chernobyl, I felt a need for some sort of quantitative comparison of these various events...
An explosion or earthquake or other disaster of that sort involves the almost instantaneous release of a large quantity of energy. For earthquakes we have a convenient measure in the Richter scale, which measures the shaking amplitude. The Richter scale increases logarithmically, so that an increase in magnitude by 1 means a shaking amplitude 10 times as large. The quantity of energy involved scales as the 3/2 power of that amplitude, so 2 magnitudes on the Richter scale corresponds to an increase in energy release by a factor of 1000. Converting energy to standard metric notation in terms of joules (1 J = 1 kg m^2/s^2), the Richter scale magnitudes come to:
Magnitude 3: 2 GJ (2x10^9 J)
Magnitude 5: 2 TJ (2x10^12 J)
Magnitude 7: 2 PJ (2x10^15 J)
Magnitude 9: 2 EJ (2x10^18 J)
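The numbers in that list follow from the standard Gutenberg-Richter energy relation, log10(E) = 1.5 M + 4.8 with E in joules - a quick sketch:

```python
# Gutenberg-Richter energy relation: log10(E in joules) = 1.5*M + 4.8.
# Each whole magnitude step multiplies the energy by 10**1.5, about 32.

def richter_energy_joules(magnitude):
    return 10 ** (1.5 * magnitude + 4.8)

for m in (3, 5, 7, 9):
    print(f"Magnitude {m}: {richter_energy_joules(m):.1e} J")
# Magnitude 3 comes out near 2e9 J (2 GJ), matching the list above,
# and each +2 in magnitude multiplies the energy by a factor of 1000.
```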
Nuclear explosions are typically measured in units of kilotons of TNT, where 1 kt TNT = 4.2 TJ, i.e. a 1 kiloton explosion should be about double the energy release of a magnitude-5 earthquake, and a 1 MT (megaton) explosion around double the energy release of a magnitude-7 earthquake.
Also worth thinking about in comparison is the non-explosive use of energy, as it runs through the natural world and as we use it for our own purposes. Since a year consists of just over 3x10^7 seconds, a 1 GW power plant over the course of a year produces 3x10^16 J or 30 PJ of electrical energy. That's about 7 times the energy release of the 1 MT explosion, about 15 times the energy release of a magnitude-7 earthquake. That energy release is spread over tens of millions of seconds, not just the few seconds of an explosion, but it's good to remember it is a large quantity of energy.
Human society currently uses about 15 TW of primary energy, or 450 EJ per year. That's over 200 magnitude-9 earthquakes' worth - roughly one every day and a half. That's a lot of energy.
Earth receives energy from our Sun at a rate of about 174 PW. In a year that's about 5x10^24 J, 5 YJ (yottajoules) or 5 million EJ. That's a magnitude-9 earthquake's worth of energy every 12 seconds! Luckily it's spread out over the whole (daylit) surface of the Earth, so we don't normally experience the magnitude of that energy flow in any dramatic fashion. Still, it's worth remembering how natural energy flows like this tend to dwarf whatever humans do.
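These comparisons check out with some quick arithmetic, using the same rounded inputs as the text:

```python
# Quick arithmetic behind the comparisons above, using rounded inputs.
SECONDS_PER_YEAR = 3.0e7      # "just over 3x10^7 seconds", rounded down
M9_JOULES = 2e18              # magnitude-9 earthquake, from the scale above

human_power_w = 15e12         # ~15 TW of primary energy use
solar_input_w = 174e15        # ~174 PW of sunlight intercepted by Earth

human_ej = human_power_w * SECONDS_PER_YEAR / 1e18
print(round(human_ej))                            # ~450 EJ per year
print(round(human_ej * 1e18 / M9_JOULES))         # ~225 magnitude-9 quakes' worth

print(f"{solar_input_w * SECONDS_PER_YEAR:.0e}")  # ~5e24 J of sunlight per year
print(round(M9_JOULES / solar_input_w, 1))        # one M9's worth every ~11.5 s
```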
So, how do our recent collection of energy-related explosions and disasters compare?
A few months ago Tamino at Open Mind posted a fascinating analysis of warming, obtained by fitting the various observational temperature series to a linear combination of El Nino, volcano, and solar cycle variations (using sunspots as a proxy for the latter), plus an underlying trend, allowing for time offsets between each causative series and the temperature. Year-to-year global average temperatures fluctuate significantly, by several tenths of a degree; taking these "exogenous" factors into account greatly reduces that variation. Not only does removing the "exogenous" components show the underlying trend more clearly, but it occurred to me that it also allows prediction of future temperatures with considerably more confidence than the usual guessing (though I've done well with that in the past), at least for a short period ahead.
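To make the approach concrete, here's a toy version of such a fit on synthetic data. Everything below is a stand-in of my own devising: the real analysis regresses the GISS series against actual indices (MEI for El Nino, volcanic aerosol optical depth, sunspot counts) and fits the time offsets too.

```python
import numpy as np

# Toy version of an "exogenous factor" fit on synthetic data: regress a
# "temperature" series on ENSO, volcanic, and solar stand-ins plus a
# linear trend, by ordinary least squares.

rng = np.random.default_rng(0)
n = 240                                  # 20 years of monthly data
t = np.arange(n)

enso = rng.standard_normal(n)            # stand-in ENSO index
volc = rng.random(n) * 0.1               # stand-in volcanic aerosol series
solar = np.sin(2 * np.pi * t / 132)      # crude 11-year solar cycle

# Synthetic "observed" anomaly: trend + exogenous factors + weather noise
temp = (0.0017 * t + 0.08 * enso - 0.30 * volc
        + 0.05 * solar + 0.05 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), t, enso, volc, solar])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Subtracting the fitted exogenous pieces leaves a much cleaner trend
adjusted = temp - X[:, 2:] @ coef[2:]
resid_raw = temp - (coef[0] + coef[1] * t)
resid_adj = adjusted - (coef[0] + coef[1] * t)
print(round(coef[1] * 120, 2))           # recovered trend per decade, ~0.2 C
print(float(np.std(resid_raw)) > float(np.std(resid_adj)))  # scatter shrinks
```

Once the exogenous coefficients are in hand, predicting a future month reduces to guessing the future exogenous series - which is exactly why the two scenarios below differ only in their assumed El Nino behavior.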
See below for a detailed discussion of what I've done with Tamino's model, for the GISS global surface temperature series. In brief, however, I present the results of two slightly different models of the future, first with no El Nino contribution beyond mid-2011, and second with a pure 5-year-cycle El Nino (starting from zero in positive phase) from mid-2011 on.
[Table: Model 1 and Model 2 predictions, by year, for the GISS Jan-Dec global average temperature anomaly]
While there's some variation in future years, the final average temperature for 2011 should be close to 0.58 (similar to the temperatures in 2009). Temperatures in 2012 are likely to be much warmer - at least breaking the record of 0.63 set in 2005 and 2010, possibly (model 2) by as much as 0.15 degrees. With the continued waxing of the solar cycle and continued increases from CO2 effects, however warm 2012 is, 2013 should be even warmer (unless we get a big volcano or another strong La Nina then).