The Monckton equation

My last few posts have been on some examples of dubious scientific publications. Some publishers are bad actors. Some authors are naively over-confident and have found naive editors or publishers to match. Sometimes it can be hard to tell. The last case I'm going to look at here is perhaps the worst situation: one where the authors are clearly behaving badly and somehow made it through some form of peer review. This is the sort of thing that gets reported regularly on Retraction Watch, and it is also similar to the Gerlich and Tscheuschner case, except that the journal in question is slightly more prestigious (G&T's journal, IJMP-B, has an impact factor of less than 0.5). And once again the topic is climate change.

The paper this time is Why models run hot: results from an irreducibly simple climate model by Christopher Monckton, Willie W.-H. Soon, David R. Legates, and William M. Briggs, published in Science Bulletin by Science China Press and Springer-Verlag. Given that Springer is about to merge with Nature, the breakdown of reasonable peer review in this case indirectly reflects badly on one of the most prestigious journal brands in all of science (Springer is of course also highly regarded).

Monckton and friends' paper has been widely criticized already by ... and Then There's Physics, by Jan Perlwitz in two articles, and by Roz Pidcock at the Carbon Brief, who quotes various other scientists on the topic. Since the essential argument is barely changed from Monckton's 2008 Physics & Society (P&S) article, which I found full of errors, I thought it deserved a bit of post-publication attention from me as well. It really is astonishing that this work was approved by an editor for what looks like a reasonable scientific journal.

At first sight this article isn't as obviously nutty as some of those I've discussed here previously - the graphics and tables seem to be well designed, and the reference section looks fairly substantive. The mathematics is once again pure algebra, with no sign of an understanding of the calculus invented by Newton and Leibniz a few hundred years back - we'll get back to that. But other than the overly simplistic math, the paper may not immediately strike even an experienced editor as absurd.

One does need to know a little of the history of Christopher Walter, Third Viscount Monckton of Brenchley. Barry Bickmore has a reasonably exhaustive list of his misdeeds, and Skeptical Science itemizes his repeatedly mistaken claims. He has a habit of misquoting or misinterpreting scientific experts. He has a very bad habit of "adapting" graphs created by others through cherry-picking, false extrapolation, or simply making stuff up, as documented for example in "Monckton's deliberate manipulation" at RealClimate and William Connolley's "Battle of the Graphs" - only two of many examples. I pointed out many similar problems with Figure 1 and Figure 2 in his P&S article. In the following I'm going to talk as if Monckton were the sole author of the paper; undoubtedly his coauthors had some input, but the content is very close to what he has written before, and he seems to take full credit for it in the various discussions he's participated in so far (particularly at Perlwitz's blog).

So let's start with the figures in the new paper. I refuse to repeat something so misleading here, so look up the paper for the original of Fig. 1 (and similarly for the other graphs I'll be discussing here). Note that Monckton's P&S article's first figure was roughly the 2002-early 2008 portion of this graph, and showed a striking cooling trend. Obviously further observational data have made the claims he made there quite obsolete: the warming continues. Note also that Monckton is showing monthly temperatures, rather than the much more stable annual averages (which are used in his Figure 2 in the new paper - Figure 2 adds AR5 projections that look just fine compared to observations). The resulting display of month-to-month variance in the new Figure 1 makes the strong warming trend less visually noticeable. And, despite the caption's claim that this is an average of five data series, the graph actually shows only the average of the two satellite datasets (UAH + RSS), as its title indicates.
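
To see how plotting monthly rather than annual values visually dilutes a trend, here is a minimal synthetic sketch (the trend and noise magnitudes are illustrative, not fitted to any real temperature dataset):

    import numpy as np

    rng = np.random.default_rng(0)
    n_years = 20
    trend = 0.015  # K/yr, illustrative warming rate

    # Synthetic monthly anomalies: a linear trend plus month-to-month noise
    t = np.arange(n_years * 12) / 12.0
    monthly = trend * t + rng.normal(0.0, 0.15, n_years * 12)

    # Annual averaging cuts the noise level by roughly sqrt(12)
    annual = monthly.reshape(n_years, 12).mean(axis=1)

    print("monthly scatter about the trend:", np.std(monthly - trend * t))
    print("annual scatter about the trend: ",
          np.std(annual - trend * np.arange(n_years)))

The same underlying trend is present in both series; the monthly version just buries it in scatter roughly √12 ≈ 3.5 times larger.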

As with Figure 2 in the P&S article, Monckton is again focusing in these first two figures on "too hot" "business as usual" projections from very early climate models (Hansen 1988 Scenario A in the P&S article, along with the 1990 IPCC report ("IPCC FAR")), dismissing not only two and a half decades of further research that have greatly improved understanding, but more importantly the fact that these projections depended on certain very pessimistic emission scenarios which did not actually come to pass. You can view the 1990 report yourself directly from the IPCC site (only 414 pages for working-group 1, in contrast to the over 2000 pages in the fifth assessment report issued last year). In particular, right after the "business as usual" paragraph in the summary that Monckton uses for these two graphs (nothing like his graph actually appears in any IPCC report) is another paragraph with much more cautious expectations for warming under non-exponential-growth scenarios:

under the other IPCC emission scenarios which assume progressively increasing levels of controls, rates of increase in global mean temperature of about 0.2°C per decade (Scenario B), just above 0.1°C per decade (Scenario C) and about 0.1°C per decade (Scenario D)

The main reason temperatures have not risen as fast as the IPCC FAR expected has nothing to do with the climate models' translation of climate pollutant levels into temperature rise; rather, it's that humans have actually been much better than expected at restraining the rate of increase of those pollutants in the atmosphere. Yay for us! This RealClimate article explains why those early "business as usual" projections were off - pollution did not follow the exponential "business as usual" trajectory, and the biggest factor is actually the human cessation of emissions of the chlorofluorocarbons (CFCs), which contributed to the ozone hole as well as to global warming. The treaty that ended their use had the unexpected side-effect of also slowing down global warming.

Given the forcing we've actually been responsible for, the 1990 report's estimate of the warming rate is remarkably good (we're between scenarios B and C). A far better comparison chart than Monckton's is Figure 1 from this comprehensive article from Skeptical Science, comparing predictions with observations:


Figure 1: IPCC temperature projections (red, pink, orange, green) and contrarian projections (blue and purple) vs. observed surface temperature changes (average of NASA GISS, NOAA NCDC, and HadCRUT4; black and red) for 1990 through 2012.

In reality, what IPCC predicted in 1990 for near-term temperature rise was really close to what's been observed over the past 25 years, when you factor in the lower-than-expected forcings. There is no substantive discrepancy between models and observations.

You can tell there's something odd about Monckton's claims along these lines in this paragraph:

Since 1990, IPCC has all but halved its estimates both of anthropogenic forcing since 1750 and of near-term warming. [...]

and yet, if we look at the issue of sensitivity - which is generally the way climate models are examined, and which Monckton emphasizes in the rest of the paper with a "climate-sensitivity" model - the IPCC's estimates of the equilibrium temperature response to a doubling of CO2 levels have barely changed over the years (see the SPM figures collected in Monckton's Table 8):

  • 1990 IPCC "FAR": 1.5 - 4.5°C, best guess 2.5°C
  • 1992 supplement: no change
  • 1995 "SAR": unchanged
  • 2001 "TAR": still 1.5 - 4.5°C range (no "best guess")
  • 2007 "AR4": 2 - 4.5°C is "likely", less than 1.5°C is "very unlikely"
  • 2014 "AR5": "likely" 1.5 - 4.5°C, extremely unlikely less than 1°C, very unlikely greater than 6°C

The range of estimates of Earth's climate sensitivity now is the same as it was 25 years ago. That is, the only sense in which "climate models have run hot", the only reason why that has happened, is embedded in Monckton's paragraph: the projected forcings were too large. There's nothing at all wrong with the models' estimate of sensitivity (well, except that it's still way more uncertain than we would like).

Ok, that's the first two figures, which nicely answer the paper's title question with no need of an "irreducibly simple model". Once again, yay us for keeping the pollution down. What about figure 3? This seems to be merely a slightly relabeled version of Figure 9.43a in AR5 WG1 (Chapter 9), which compared the feedback values derived from the Coupled Model Intercomparison Project phase 3 (CMIP3 - used for AR4) with those derived from CMIP5 (used for AR5). Monckton correctly notes that the total derived feedback sum was lower in AR5, despite very close or slightly higher sensitivities. This apparent discrepancy between equilibrium sensitivities (obtained by running the climate models after a spike in CO2 levels) and the feedback sum is noted in the companion Figure 9.43b (which Monckton did not reproduce) and can be seen in more detail for each climate model in Table 9.5; the discussion in section 9.7.2.4 states:

The ECS can be estimated from the ratio of forcing to the total climate feedback parameter. This approach is applicable to simulations in which the net radiative balance is much smaller than the forcing and hence the modelled climate system is essentially in equilibrium. This approach can also serve to check the internal consistency of estimates of the ECS, forcing, and feedback parameters obtained using independent methods. The relationship between ECS from Andrews et al. (2012) and estimates of ECS obtained from the ratio of forcings to feedbacks is shown in Figure 9.43b. [...] On average, the ECS from forcing to feedback ratios underestimate the ECS from Andrews et al. (2012) by 25% and 35%, or up to 50% for individual models, [...]

You can see the confusion about this issue in Monckton's paper in the references to "Fig. 3":

As Fig. 3 shows, IPCC’s interval 1.9 [1.5, 2.4] W m⁻² K⁻¹ in AR4 [cf. 33] was sharply cut to 1.5 [1.0, 2.2] W m⁻² K⁻¹ in AR5. Yet, the climate-sensitivity interval [2.0, 4.5] K in the CMIP3 model ensemble [4] was slightly increased to [2.1, 4.7] K in CMIP5 [5].

and then:

the simple model reveals that the climate sensitivity 3.3 [2.0, 4.5] K in AR4 should have fallen sharply to 2.2 [1.7, 3.7] K in AR5 commensurately with the reduction of the feedback-sum interval between the two reports (Fig. 3). For the variance between the CMIP3 and CMIP5 projections of climate sensitivity is inferentially confined to the feedback-sum interval. If the CMIP5 models took account of significant net-positive feedbacks not included in AR5, Fig. 9.43, in the chart of climate-relevant feedbacks (Fig. 3), it is not clear why that chart was not updated to include them. The sharp reduction of the feedback-sum interval in CMIP5 and hence in AR5 compared with the interval in CMIP3 and hence in AR4 mandates a sharp reduction in the climate-sensitivity interval, which, however, was instead increased somewhat.

Or could it be that Monckton's "simple model" is in fact wrong? There were some changes in the analysis of the models between CMIP3 and CMIP5 that may explain some of the difference. But if you actually read the underlying papers defining feedbacks, you find sophisticated discussion of linearization and partial differential equations - complications of calculus nowhere to be found in Monckton's paper. As the IPCC report makes clear, the actual sensitivity found by running the climate models in CMIP5 is larger than what one finds from the sum of derived feedback terms. Monckton's model simply fails to apply.
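
The arithmetic behind this is easy to check. In the linearized framework, the forcing-to-feedback estimate of equilibrium sensitivity is the CO2-doubling forcing divided by the total climate feedback parameter (Planck response minus the feedback sum). A minimal sketch, using the central feedback-sum values quoted above together with the canonical values F_2x = 3.7 W m⁻² and a Planck parameter of 3.2 W m⁻² K⁻¹:

    # ECS from the forcing-to-feedback ratio (linearized framework).
    # Feedback sums are the central estimates quoted in the text;
    # p is the standard Planck (no-feedback) response parameter.
    F_2x = 3.7  # W m^-2, canonical CO2-doubling forcing
    p = 3.2     # W m^-2 K^-1, Planck response parameter

    for label, f in [("AR4/CMIP3", 1.9), ("AR5/CMIP5", 1.5)]:
        print(f"{label}: feedback sum {f} -> ECS ~ {F_2x / (p - f):.1f} K")

That ratio gives about 2.2 K for the AR5 feedback sum, while the CMIP5 models' directly diagnosed ECS values centre near 3.2 K - precisely the 25-35 % underestimate the quoted IPCC passage describes. Monckton treats the ratio as exact and concludes the models are inconsistent; the IPCC's point is that the linearized ratio systematically underestimates the true model sensitivity.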

So Fig. 3 itself is perfectly reasonable, just misunderstood by the authors. What about Fig. 4? This seems to be an exact duplicate of Figure 6 from the paper's reference 38 (Roe G (2009), "Feedbacks, timescales, and seeing red") - note that other than via the time-dependent climate model used by Roe (and a resulting factor r_t), there is no time derivative or dynamic component to anything in Monckton's paper. Table 2 seems to be an extraction of numbers from Roe's figure with some extrapolations. I note with some amusement that Fig. 4 in the new paper seems to allow for climate sensitivities of 10°C or more!

This generosity in considering possibly very high sensitivities is extended to Monckton's Fig. 5, where the left-hand scale indicating sensitivity to doubling CO2 goes up to 25°C! Of course the purpose of Fig. 5 is to indicate that the regime with climate sensitivities above 2 or 3 degrees is "unstable", due to the rapidly increasing slope of sensitivity vs. closed-loop gain, and therefore (the main conclusion of the paper) not to be considered. This was a theme in Monckton's P&S article as well - incredulity that the feedbacks could put Earth so close to an instability. Well, as others have pointed out, Earth sure seems to behave as if it sometimes has an unstable climate (the ice ages, and the increasing evidence for "snowball Earth" episodes, for example). The last 8000 years or so have been very nice and quiet, climatically, but there's plenty of evidence that the recent climate is not Earth's normal state.
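
The steepness Monckton points to is just the shape of the closed-loop response curve. A minimal sketch (λ_0 = 0.3125 K W⁻¹ m² is the standard no-feedback Planck response; the feedback sums f are illustrative points across the plausible range):

    # Closed-loop sensitivity ECS = lambda_0 * F_2x / (1 - g), loop gain g = lambda_0 * f
    lambda_0 = 0.3125  # K W^-1 m^2, no-feedback (Planck) response
    F_2x = 3.7         # W m^-2, CO2-doubling forcing

    for f in [0.0, 1.0, 1.5, 2.0, 2.5, 3.0]:  # feedback sum, W m^-2 K^-1
        g = lambda_0 * f
        print(f"f = {f:.1f}: g = {g:.3f}, ECS = {lambda_0 * F_2x / (1 - g):.1f} K")

The curve indeed steepens rapidly as g approaches 1 - but that is a property of any linear feedback system, not evidence that nature avoids the steep region.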

All these close-to-reasonable figures must have left a pent-up urge to "go wild", because Fig. 6 is a real doozy. We've already dealt with IPCC (1990) (and by extension Hansen (1988, scenario A, business as usual)). Monckton additionally includes what he calls "IPCC (2007)" projecting a rise of 0.38 K/decade, "IPCC (2013 2nd draft)" projecting 0.23 K/decade, and "IPCC (2013 final draft)" projecting 0.13 K/decade. Where on earth did these numbers come from? There is no explanation of any of them in the text, and no specific citations in the caption or the surrounding discussion.

To take the first one, IPCC (2007) presumably refers to projections presented in IPCC AR4 WG1 - the text in the summary for policymakers, p. 12, under "Projections of Future Changes in Climate", states:

For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected.

so no mention of 0.38 K/decade there! Following the references to sections 10.3 and 10.7 of the main report, we find Figure 10.4 showing multi-model means of surface warming for various emissions scenarios. Here the high-GHG-growth scenario A2 does show a warming of over 3 K by 2100 (3.13°C per century according to Table 10.5 and the surrounding discussion) - but the temperature change is certainly not linear in time; it accelerates over the century in this scenario. Over the first 30 years the A2 scenario has a warming of 0.64 K, or 0.21 K/decade, rather than the 0.31 K/decade a linear extrapolation would give. In any case this doesn't explain where Monckton got 0.38 K/decade from!

Table SPM.3 in the IPCC AR4 WG1 report shows projected temperature changes to 2090-2099 relative to 1980-1999 under the various forcing scenarios, with ranges from 0.3 K (low end for constant year-2000 concentrations) to 6.4 K (high end for the A1FI scenario). Did Monckton just take some average of these numbers and plot a straight line in his figure 6? The mind boggles. The actual short-term projection in AR4 (IPCC 2007) was exactly as stated above, about 0.2°C per decade, or roughly half the slope Monckton showed in this figure.

Monckton's AR5 numbers are surely similarly confused, but I don't think we even need to get into them; the range 0.13 to 0.23 K/decade is certainly reasonable, whatever his original source for it was. The real question here is why the green part of figure 6, labeled "Observations", is so low (0.00 to 0.11 K/decade?). And here we have a wonderfully subtle case of cherry-picking. Perhaps the referees had become tired of all the silly stuff and this just slipped right by - but look at the labels on those two curves in the green section:

  • OBS: HadCRUT4, 63 yr
  • OBS: RSS, 17 yr

Why a 63-year trend when we're supposedly talking about "recent" warming? Note the graph starts in 1995 (the axis labels, at least - the straight lines look like they start outside the axes, in 1990). 63 years would take us back to 1951 or 1952. Monckton is averaging the accelerated warming of recent years with the slower warming of decades ago to lowball the observational number here. Skeptical Science has a nice Trend Calculator that confirms the 63-year trend for HadCRUT4 is indeed 0.113 K/decade. But if you start in 1990 (as this graph seems to) it's 0.143 K/decade. From 1980 it's 0.159 K/decade. Both of those are comfortably within the IPCC AR5 range in Monckton's graph - but he chose to plot a 63-year trend, not the recent trend. I wonder why?

As for RSS, similarly, if you plot from 1980 the trend is 0.123 K/decade, and from 1990 it's 0.108 K/decade - slightly below, but not far from, the IPCC range Monckton plotted. Even though RSS has been trending low recently relative to all the other temperature datasets, it's really not that far from expectations. Of course if you start in 1997 or 1998 (Monckton's 17 years) the trend is very close to zero. Wonderful cherry-pick there.
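
The mechanics of this kind of start-year cherry-pick are easy to demonstrate with synthetic data: fit an ordinary-least-squares trend to the same noisy series from different start years and watch the slope swing. A sketch (the 0.15 K/decade trend, noise level, and 1998-style warm spike are all illustrative, not a real dataset):

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(1950, 2015)
    # Synthetic anomalies: 0.015 K/yr trend, noise, plus one warm "spike" year
    temps = 0.015 * (years - 1950) + rng.normal(0.0, 0.1, years.size)
    temps[years == 1998] += 0.25

    for start in (1951, 1980, 1990, 1998):
        m = years >= start
        slope = np.polyfit(years[m], temps[m], 1)[0] * 10  # K/decade
        print(f"trend from {start}: {slope:+.3f} K/decade")

Starting at a warm outlier like 1998 systematically drags the fitted trend down - exactly the effect Monckton's 17-year window exploits.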

So, Figs. 1 and 2 greatly overstate the IPCC 1990 projections by showing only the BAU emissions scenario, which we luckily did not follow. Fig. 6 is a combination of outright fabrication and unbelievable cherry-picking. The other figures are reasonable (if misinterpreted in the text) - though Figs. 3 and 4 are essentially copies from elsewhere. Half the figures being reasonably good is actually an encouraging improvement over the P&S paper. What else has Monckton learned in the intervening 6+ years?

Well, there's almost nothing in this paper about temperatures falling (other than the RSS cherry-pick in Fig. 6). Of course temperatures have risen considerably since January of 2008 so the naive projections of imminent ice ages from Monckton and friends at the time seem rather silly now. There's nothing about the 1940-1975 cooling, nothing about cooling Antarctica, no mention of an "absence of ocean warming" - in fact the paper briefly acknowledges the "uptake of heat in the benthic strata of the global ocean". There's no mention of ocean oscillations, though there is a long list of citations for "validation failure in complex general-circulation models" which may be where such things are hiding in this paper. There's no mention of the influence of the Sun, nor global warming on other planets.

The new paper does not approvingly use (and extrapolate) McKitrick's claim that warming is actually only half what's been observed, though it does cite the 2007 McKitrick and Michaels paper among the "validation failure" articles (though it had nothing to do with validating GCMs!). There is no appeal to "chaos" to dispute quantification of confidence. There don't seem to be any attempts in this paper to criticize the IPCC excessively (aside from misrepresenting the FAR scenarios); indeed the paper seems to defer to IPCC statements and results to a large degree, though they are occasionally misinterpreted or misunderstood (as the Fig. 3 example shows).

There are some new odd complaints in the second section about model failures regarding "altocumulus clouds" and mid-Holocene temperatures in China, but this seems quite half-hearted (and largely irrelevant - models are imperfect and will never get everything right) compared to the P&S diatribe on the subject. So I'd rate the new paper much improved in its presentation of introductory material (the main problem is just the IPCC FAR BAU misrepresentation).

What about the meat of the paper, the "irreducibly simple climate-sensitivity model" embodied in equation 1? This is essentially the same as the model presented in the P&S paper (eq. 1 there as well), with the exception of a new, rather interesting time-dependent term. Let's present them side by side for comparison:

  • P&S:

    ΔT_λ = ΔF_2x κ f = ΔF_2x λ

  • New paper:

    ΔT_t = q_t⁻¹ ΔF_t r_t λ_∞
         = q_t⁻¹ ΔF_t r_t λ_0 G

(the new paper has some elaborations on this which we'll get to in a minute, but let's start with this).
In the P&S paper, the change in temperature was seen as dependent on the feedback parameter λ. Now the change is dependent on time t. In the new paper the forcing change has been subdivided into a CO2-only component (ΔF_t) and a multiplier to get the other radiative forcings (q_t⁻¹). Note that in the discussion there seems to be a bit of confusion over whether this q factor is a ratio of CO2 forcing to all GHG forcings, or to all anthropogenic forcings (including, for instance, aerosol forcings). The factor 0.7 Monckton mentions is appropriate to the GHG fraction but not to the net total forcing - however, I don't believe this matters much for the comparisons, as the key issue is the sensitivity to a particular overall forcing change, which is independent of how you split that forcing up between CO2 and other things. The term λ (feedback parameter) in the P&S paper is now λ_∞. The term κ (base, "no-feedbacks" sensitivity) is now λ_0. The feedback multiplier f is now the closed-loop gain G. There is only one substantively new term in Monckton's new paper - the "transience fraction" r_t. This is derived from Monckton's ref. 38 (Roe, discussed above), also shown in Fig. 4 and Table 2.

There is, however, along with this, a new meaning to all the terms, due to the time dependence introduced by r_t. This time dependence doesn't make a whole lot of sense, as we'll get to, but it is an interesting twist. First note that eq. 1 is inconsistent between its 3rd and 4th right-hand expressions - or at least somewhere between the 1st and 4th. λ_∞ should be a time-independent constant, the final total response. However, by the 4th expression it has become λ_0 (1 − λ_0 f_t)⁻¹, which contains the time-dependent feedback term f_t. This actually makes some sort of sense - in addition to the transient temperature response introduced by r_t (due to slow responses such as ocean warming), the feedbacks themselves operate on different timescales, so one should expect the closed-loop gain and the associated feedback factors to change depending on the time interval you are looking at.
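
For concreteness, here is Monckton's eq. 1 in executable form - a minimal sketch, with all parameter values illustrative (roughly the mainstream numbers discussed above: λ_0 = 0.3125 K W⁻¹ m², a feedback sum f_t ≈ 2 W m⁻² K⁻¹, q ≈ 0.7, and a transience fraction r_t read off a Roe-style curve), not Monckton's exact table entries:

    # Monckton's eq. 1: dT_t = q_t^-1 * dF_t * r_t * lambda_0 * (1 - lambda_0 * f_t)^-1
    def monckton_dT(dF_co2, q=0.7, r_t=0.6, lambda_0=0.3125, f_t=2.0):
        """Evaluate the 'irreducibly simple' model for a CO2-only forcing change."""
        G = 1.0 / (1.0 - lambda_0 * f_t)  # closed-loop gain
        return (1.0 / q) * dF_co2 * r_t * lambda_0 * G

    # e.g. an illustrative CO2 forcing change of ~1.8 W m^-2 since 1750
    print(f"dT = {monckton_dT(1.8):.2f} K")

Note that everything here is a simple product; the only time dependence enters through the values of r_t and f_t, which is exactly the structural problem discussed next.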

I believe it actually is a unique and new model as Monckton has claimed - so why doesn't it make much sense?

The problem is the way a "transient temperature response" would actually work. Suppose the forcing changes by a small amount at time t = 0. Then the temperature wouldn't change instantly but would follow a pattern as in Monckton's figure 4 (originally from Roe) - that's r_t, increasing over time. But suppose we then add another forcing change at time t = 1 year. The response to that forcing should be similar, but delayed by a year - r_(t−1) rather than r_t. Similarly for a forcing change at t = 100 years, the relevant factor is r_(t−100), not r_t.

But Monckton's equation just multiplies r_t by whatever the current forcing q_t⁻¹ ΔF_t happens to be. That's wrong. What you need to do is a convolution - integrate the incremental forcing change at time s (that is, dF/ds) times the response r_(t−s), from s = 0 to t (or from as far back in the past as you can go). Of course the convolution reduces to a simple product if the forcing change is just a step change (but then there is no time dependence in the forcing itself, other than the step).
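
A minimal numerical sketch of the difference, using an illustrative forcing ramp and a saturating-exponential stand-in for Roe's response curve (neither is from Monckton's or Roe's actual numbers):

    import numpy as np

    # Illustrative transient response: rises from 0 toward 1 over decades
    # (a stand-in for Roe's r_t curve, not his actual result).
    def r(t, tau=30.0):
        return 1.0 - np.exp(-t / tau)

    years = np.arange(161)   # 160 years of history, annual steps
    F = 0.02 * years         # illustrative forcing ramp, W m^-2
    lam = 0.8                # illustrative equilibrium sensitivity, K per W m^-2

    # Correct treatment: convolve forcing *increments* with the lagged response
    dF = np.diff(F, prepend=0.0)
    T_conv = lam * np.array(
        [np.sum(dF[:t + 1] * r(t - np.arange(t + 1))) for t in years])

    # Monckton's treatment: multiply the *current* forcing by r_t
    T_prod = lam * F * r(years)

    print(f"after 160 years: convolution {T_conv[-1]:.2f} K, "
          f"simple product {T_prod[-1]:.2f} K")

For a ramped forcing, the simple product treats all of today's forcing as if it had been applied at the very start, so the two curves disagree everywhere; they coincide only for a single step change at t = 0.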

In actually applying the equation Monckton seems to assume a step function in any case - at least in section 7 and table 4 - ΔF_t is just a constant, as if the forcing suddenly jumped in 1850 to the 2011 value. And the discussion there on "committed but unrealized warming" is nonsensical - that's exactly what (1 − r_t) gives you, the committed but unrealized warming - which even in Monckton's model here is between 0.4 and 1.0 times the warming so far.
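
That last ratio falls straight out of the algebra: under a step forcing the realized warming is ΔT_eq · r_t, so the pipeline is ΔT_eq · (1 − r_t). A two-line check, with r_t values chosen to be illustrative of the range spanned in Monckton's Table 2:

    # Committed-but-unrealized warming under a step forcing:
    # realized = dT_eq * r_t, so the pipeline is dT_eq * (1 - r_t).
    for r_t in (0.5, 0.6, 0.7):  # illustrative transience fractions
        print(f"r_t = {r_t}: committed/realized = {(1 - r_t) / r_t:.2f}")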

In short - to do this sort of analysis properly, calculus is desperately called for: at the least the integrals required for convolutions (and the derivatives needed to get the proper time dependence of forcing changes). This level of mathematics seems to be beyond Monckton at the moment. Maybe in another six years we'll have finally made that leap.

There are a number of additional improvements over the P&S article I should note. Monckton is no longer willing to seriously question the IPCC estimates for radiative forcing - in the P&S article he just arbitrarily divided the forcing by a factor of 3 ("to take account of the observed failure of the tropical mid-troposphere to warm ..."). He no longer seriously questions the standard value for the "no-feedbacks" (Planck) response - as Fig. 3 in the new paper shows, there is very little disagreement on that value between models; it is not an uncertain quantity at all, contrary to the claims Monckton made in P&S.

The central realm of uncertainty in understanding climate has long been the feedbacks - particularly the cloud feedback. At least that concern has taken center stage in the new paper - but there is still an echo of wishful thinking involved. In P&S we had:

the feedback-sum b cannot exceed 3.2 W m–2 K–1 without inducing a runaway greenhouse effect. Since no such effect has been observed or inferred in more than half a billion years of climate, since the concentration of CO2 in the Cambrian atmosphere approached 20 times today’s concentration, with an inferred mean global surface temperature no more than 7 °K higher than today’s (Figure 7), and since a feedback-induced runaway greenhouse effect would occur even in today’s climate where b >= 3.2 W m–2 K–1 but has not occurred, the IPCC’s high-end estimates of the magnitude of individual temperature feedbacks are very likely to be excessive, implying that its central estimates are also likely to be excessive.

while in the new paper:

A plausible upper bound to f may be found by recalling that absolute surface temperature has varied by only 1 % or 3 K either side of the 810,000-year mean [40, 41]. This robust thermostasis [42, 43], notwithstanding Milankovich and other forcings, suggests the absence of strongly net-positive temperature feedbacks acting on the climate.
...
Also, in electronic circuits, the singularity at g_∞ = +1, where the voltage transits from the positive to the negative rail, has a physical meaning: in the climate, it has none.

Hmm. Actually in the climate (or any other system with feedbacks of this sort) the singularity does have a real physical meaning - that's where the system actually becomes unstable and "runs away". Runaway warming - until feedback parameters change sufficiently to bring it back into a stable regime again.

Monckton does in all this bring up a good point - but contrary to his conclusion that instability is impossible, the reality is that our climate system may not be far from an instability of this sort, which could lead to abrupt change (change that happens on the time-scale of the feedback term that puts us over the edge). It's well known that if our Sun were just a bit hotter, or we were a bit closer to it, our oceans would boil (as may have happened on Venus). Given that the Sun is gradually increasing in brightness, we have maybe a billion years or so before that happens. That instability is real; there may be others of shorter duration (related to ice-sheet feedbacks, perhaps) that are also a serious concern.

Anyway, of course the conclusions etc. that follow from this assumption that g has to be less than 0.1 are total nonsense; I really don't need to go further down that hole. The first half of the paper is pretty reasonable, except for the lack of calculus and the apparent incomprehension of the need for it. Maybe the reviewers got bored and just didn't notice how crazy it got towards the end. Or maybe something else happened in review.

Can you trust papers published in Science Bulletin? For sure not this one!