Mathematical analysis of Roy Spencer's climate model

BYU geologist Barry Bickmore recently reviewed Roy Spencer's book, "The Great Global Warming Blunder", and found a number of genuine "blunders" by the author. In particular, he found some very peculiar properties of the simplified physical model that Spencer made a central feature of the book: Spencer's curve-fitting admits infinitely many solutions besides the one he somehow settled on, along with a number of related issues.

I tangled with Spencer over an earlier model like this, which he was promoting more than three years ago. What he didn't seem to realize about that first model was that it was essentially trivial: a linear two-box model with two time constants (a subject I explored in detail here a while back). I tried explaining this, but he seems not to have gotten my point that such a model inherently contains no interesting internal dynamics, just relaxation on some (in this case two) time scales. That seems to go completely against the point I thought he was trying to make, that some sort of internal variability was responsible for decadal climate change.

So it was something of a surprise to me that Spencer based his "Great Blunder" book on an even more simplified version of this model, with just one effective time constant. He even tried to get a paper published using this essentially trivial model of Earth's climate. As Bickmore outlined in his part 1, the basic equation Spencer uses is:

(1) dT/dt = (Forcing(t) - Feedback(t))/Cp

where T is the temperature at a given time t, Forcing is a term representing the input of energy into the climate system (there is a standard definition for this in terms of radiation at the "top of the atmosphere") and Feedback is a term that itself depends on temperature as

(2) Feedback(t) = α (T(t) - Te)

with α a linear feedback parameter and Te an equilibrium temperature in the absence of forcing (Bickmore and apparently Spencer don't actually use absolute temperature T and equilibrium value Te, but rather write the equations in terms of the difference ΔT = T - Te, which amounts to the same thing, but obscures an important point we'll return to later).

The final term Cp is the total heat capacity involved. Each of forcing, feedback and heat capacity is potentially a global average, but would normally be expressed as a quantity per unit area, for example per square meter. Since the bulk of Earth's surface heat capacity that would respond to energy flux changes on a time-scale of a few years is embodied in the oceans (about 70% of the surface), Cp should be defined essentially as 0.7 times the heat capacity of water per cubic meter, multiplied by the relevant ocean depth in meters (h):

(3) Cp = 0.7*4.19*10^6 J/(m^3 K) * h = c * h

where c = 2.9 MJ/(m^3 K) (Spencer and Bickmore seem to have forgotten the factor of 0.7, so use a slightly larger value for c, which means their h values are probably smaller than the actual ocean depth such a heat capacity would be associated with).

Rewriting equation 1 with these definitions we wind up with:

(4) dT/dt = (Forcing(t) - α (T(t) - Te))/(c h)

as the essential equation of Spencer's model. In the case where there is no forcing, Forcing(t) = 0, the model then reduces to:

(5) dT/dt = - (α / c h) (T(t) - Te)

Note that α has units of watts per square meter per degree, while c h has units of joules per square meter per degree. Since 1 W = 1 J/s, the ratio has units of inverse time (as it should to match the time derivative on the left-hand side). If we define a time constant

(6) τ = c h /α

Spencer's model equation (in the case of no forcing) becomes simply:

(7) dT/dt = -(T(t) - Te)/τ

This is one of the simplest possible first-order differential equations, and using the properties of derivatives of the exponential function it is easy to show the most general solution of this equation is:

(8) T(t) = Te + A e^(-t/τ)

This means that, whatever the initial value of the temperature at time t = 0, say, this model forces the temperature to come exponentially close to the equilibrium temperature Te, with a characteristic time-constant τ. After one time period τ, the difference between T(t) and Te is a factor 1/e smaller than at first. After two time periods τ it's 1/e^2 smaller, and so on.

In particular, Spencer's model has absolutely no internal dynamics other than a simple exponential decay to equilibrium.
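To make that concrete, here is a minimal sketch (in Python; not anything from Spencer's or Bickmore's code, and the parameter values are purely illustrative) that integrates equation 7 numerically and compares it to the analytic solution of equation 8:

```python
import numpy as np

Te  = 288.0    # K, illustrative equilibrium temperature
tau = 21.0     # years, illustrative time constant
T0  = 289.0    # K, arbitrary initial temperature
dt  = 0.01     # years
t   = np.arange(0.0, 100.0, dt)

# Forward-Euler integration of eq. 7: dT/dt = -(T - Te)/tau
T_num = np.empty_like(t)
T_num[0] = T0
for i in range(1, len(t)):
    T_num[i] = T_num[i-1] - dt * (T_num[i-1] - Te) / tau

# Analytic solution, eq. 8, with A = T0 - Te
T_exact = Te + (T0 - Te) * np.exp(-t / tau)

print(np.max(np.abs(T_num - T_exact)))   # tiny; only discretization error
```

However you start it off, the curve just relaxes toward Te; there is nothing else it can do.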

So what is the relevant time-constant for Spencer's model? From eq. 6 and 3 we get:

(9) τ = 2.9 MJ/(m^3 K) * (h/α) * (1 year / 31.6*10^6 s)
    = 0.092 (h/α) years

if h is expressed in meters and α in W/(m^2 K) (note that the scale factor is about 0.13 for Bickmore and Spencer, without the 0.7 factor for ocean area). If ocean depth h = 700 m and α = 3.0 W/(m^2 K), as Spencer apparently claimed to find from fitting this model to observed temperatures (with a certain forcing, which we'll get to), that gives a time constant τ = 21 years (or almost exactly 30 years without the ocean area factor).

You would get exactly the same time constant with h = 350 and α = 1.5, or h = 70 and α = 0.3: it is determined entirely by the ratio of ocean depth to the feedback parameter. In particular, a fit of temperatures using just this model (remember, without any forcings) could not possibly determine both the depth and the feedback parameter; it could only determine the ratio of the two, because the response depends only on that ratio!
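A quick back-of-the-envelope check of equations 6 and 9, and of this h/α degeneracy (just a sketch, using the numbers quoted above):

```python
seconds_per_year = 31.6e6
c_water = 4.19e6          # J/(m^3 K), heat capacity of water per unit volume

def tau_years(h, alpha, ocean_frac=0.7):
    """Time constant in years for ocean depth h (m) and feedback alpha (W/(m^2 K))."""
    return ocean_frac * c_water * h / alpha / seconds_per_year

print(0.7 * c_water / seconds_per_year)          # ~0.092, the scale factor in eq. 9
print(tau_years(700, 3.0))                       # ~21 years, Spencer's quoted values
print(tau_years(700, 3.0, ocean_frac=1.0))       # ~31 years without the 0.7 factor
print(tau_years(350, 1.5), tau_years(70, 0.3))   # the same ~21 years: only h/alpha matters
```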

This single time-constant model (being so very simple) has in fact been used at least a few times before to try to model the observed temperature changes of the past century or so. Stephen Schwartz used just such a model in a 2007 paper, "Heat capacity, time constant, and sensitivity of Earth’s climate system", published in Journal of Geophysical Research volume 112, D24S05. That paper found a time constant of 5 years - but it was quickly criticized, in particular for its assumption that Earth's climate is governed by one single time constant. A later followup paper increased the likely time constant to 8.5 years. Lucia Liljegren has used essentially the same model, calling it "Lumpy", to fit historical temperatures with a single time-constant using the GISS Model E forcings, and found a time constant of 14.5 years. In that context Spencer's 30-year time constant seems a little long, but not out of the question - he was fitting a somewhat different "forcing" than Liljegren, for instance.

The mathematics for completely solving this simple single-time-constant model even with a forcing is hardly more complicated than the case with zero forcing. Let's abbreviate forcings to F(t):

(10) dT/dt = F(t)/c h - (T(t) - Te)/τ

and introduce a function A(t) so that:

(11) T(t) = Te + A(t) e^(-t/τ)

Then taking a derivative gives:

(12) dT/dt = (dA/dt) e^(-t/τ) - (A(t)/τ) e^(-t/τ)

so (10) becomes:

(13) dA/dt = F(t) e^(t/τ)/(c h)

which is solved by a straightforward integral:

(14) A(t) = A0 + ∫_-∞^t F(s) e^(s/τ) ds / (c h)

So the full solution for temperature in Spencer's model can be expressed directly in this integral form (no "computer modeling" required at all):

(15) T(t) = Te + A0 e^(-t/τ) + ∫_-∞^t F(s) e^(-(t - s)/τ) ds / (c h)

The forcing F(t) appears here weighted exponentially in time, so that more recent (s close to t) values of forcing contribute more strongly, but there is some exponentially lower contribution with time-constant τ going back as long as you like. This expression is a convolution (essentially a smoothing) of the forcing with the exponential function, which we can express by a new function V(t):

(16) V(t) = ∫_-∞^t F(s) e^(-(t - s)/τ) ds

so the temperature as a function of time in Spencer's model is in the end given simply by 3 terms:

(17) T(t) = Te + A0 e^(-t/τ) + V(t)/(c h)

i.e. the temperature relative to equilibrium is an exponentially decaying transient term plus a convolved (exponentially smoothed) form of the forcing function, divided by the heat capacity.
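As a sanity check on equations 15-17, here is a short sketch that integrates equation 10 directly for a made-up forcing and compares the result to the discretized convolution form (the forcing and parameter values are invented for illustration; they are not the GISS or PDO series):

```python
import numpy as np

sec_per_year = 31.6e6
tau = 21.0                 # years
c_h = 2.9e6 * 700.0        # J/(m^2 K), heat capacity per unit area for h = 700 m
Te  = 288.0                # K
dt  = 0.05                 # years
t   = np.arange(0.0, 200.0, dt)
F   = 0.5 * np.sin(2 * np.pi * t / 60.0) + 0.02 * t   # made-up forcing, W/m^2

# Direct forward-Euler integration of eq. 10
T_ode = np.full_like(t, Te)
for i in range(1, len(t)):
    dTdt = F[i-1] * sec_per_year / c_h - (T_ode[i-1] - Te) / tau
    T_ode[i] = T_ode[i-1] + dt * dTdt

# Convolution form, eqs. 15-17, with A0 = 0 and F = 0 before t = 0
V = np.convolve(F, np.exp(-t / tau))[:len(t)] * dt     # eq. 16, discretized
T_conv = Te + V * sec_per_year / c_h                   # eq. 17

print(np.max(np.abs(T_ode - T_conv)))   # small, and shrinks as dt is reduced
```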

When you have known real forcings, for example the GISS Model E forcings used by Liljegren, for any given value of the time constant τ you can determine V(t), and then fitting the T(t) curve of equation 17 to observed temperatures constrains you to a real value for the effective ocean depth h. Varying both you can find a best fit for both parameters, h and τ.
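In code, that fitting step might look something like the sketch below, using scipy's curve_fit on synthetic data; the forcing, the "observations" and all parameter values are invented here purely to show the procedure (the real exercise would use the GISS forcings and an observed temperature record):

```python
import numpy as np
from scipy.optimize import curve_fit

sec_per_year = 31.6e6
c  = 2.9e6                        # J/(m^3 K), includes the 0.7 ocean-area factor
dt = 0.1
t  = np.arange(0.0, 100.0, dt)    # years since the start of the record, say
F  = 0.04 * t                     # invented, slowly ramping forcing (W/m^2)

def model_anomaly(t, h, tau):
    """Temperature anomaly from eq. 17 with Te = A0 = 0."""
    V = np.convolve(F, np.exp(-t / tau))[:len(t)] * dt    # eq. 16
    return V * sec_per_year / (c * h)

# Fake "observations" generated with known parameters, plus noise
rng = np.random.default_rng(0)
T_obs = model_anomaly(t, 100.0, 15.0) + rng.normal(0.0, 0.02, len(t))

popt, _ = curve_fit(model_anomaly, t, T_obs, p0=[50.0, 5.0],
                    bounds=([1.0, 1.0], [1000.0, 100.0]))
print(popt)   # should come back close to (100, 15)
```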

This is where Spencer really ran into trouble, though, as can be seen from this figure in Bickmore's review:

The red line shows 24 "best-fit" curves for different values of ocean depth 'h' - and somehow they all lie on top of one another!

Spencer's problem here was that he introduced a third variable, β, which he also tried to fit at the same time. Instead of using a known forcing F(t), he assumed that the forcing was proportional to a certain ocean oscillation index, the PDOI, scaled by an unknown factor β.

The same equations as above apply with F(t) = β PDOI, and we can convolve to get V(t) = β Q(t) in the same way, for a given time constant τ. That turns the solution equation for temperature (eq. 17) into:

(18) T(t) = Te + A0 e^(-t/τ) + β Q(t)/(c h)

The original three parameters α, β and h (Te and A0 are additional free variables we'll get to in a minute) appear in this equation for temperature only as ratios: in τ = c h/α, and in the term β/(c h) multiplying Q(t).

That means, whatever is done regarding the other two free variables, using this model to fit temperatures cannot possibly constrain the absolute values of those original three. A fit constrains the ratio of α to h and of β to h, but their actual scale is completely free. Spencer's claim that he ran these models with 100,000 different combinations should be highly embarrassing to him - he could have run them infinitely many times; this model simply cannot constrain the magnitude of the feedback parameter, which is free up to an arbitrary scale factor (with the same scale factor multiplying the ocean depth and β). That is exemplified in Bickmore's graph of best-fit α, β values vs. h:

Spencer's claim that his model shows climate sensitivity to be low is truly embarrassing given that absolutely any nonzero value for α would give exactly the same temperature curve.
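Here is that degeneracy as a sketch in code: scale α, β and h by the same factor and equation 18 returns the same temperature curve to machine precision. The "PDO index" below is just a made-up sinusoid, and the parameter values are illustrative.

```python
import numpy as np

sec_per_year = 31.6e6
c  = 2.9e6
dt = 0.1
t  = np.arange(0.0, 110.0, dt)
pdoi = np.sin(2 * np.pi * t / 22.0)        # stand-in for the PDO index

def spencer_like_T(alpha, beta, h, Te=288.0, A0=0.3):
    tau = c * h / alpha / sec_per_year                       # eq. 6, in years
    Q = np.convolve(pdoi, np.exp(-t / tau))[:len(t)] * dt    # convolved index
    return Te + A0 * np.exp(-t / tau) + beta * Q * sec_per_year / (c * h)   # eq. 18

T1 = spencer_like_T(alpha=3.0, beta=1.0, h=700.0)
T2 = spencer_like_T(alpha=0.3, beta=0.1, h=70.0)   # everything scaled by 0.1
print(np.max(np.abs(T1 - T2)))                     # ~0: the curves are indistinguishable
```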

That's not quite the end of it - Spencer's other claim was that this model showed that 20th century temperatures were well modeled by the PDOI. But that depended on having those additional parameters Te and A0 to play around with, as Bickmore showed. In particular, the effect of freedom in choice of A0 is clearly seen in Bickmore's figure 8:

where different initial starting choices for the temperature in the year 1900 show their exponentially decaying contribution to temperatures, with 30-year time constant. That one arbitrary transient does much of the work in fitting the early part of the 20th century temperature curve, but it has no relation to the ostensible forcing involved at all.
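For a rough sense of how much work that transient can do, here are the decaying contributions from a few arbitrary 1900 starting offsets with the 30-year time constant (the offsets are invented for illustration):

```python
import numpy as np

tau = 30.0                                             # years
years = np.array([1900, 1920, 1940, 1960, 1980, 2000])
for A0 in (-0.6, -0.3, 0.3):                           # arbitrary 1900 offsets, in K
    print(A0, np.round(A0 * np.exp(-(years - 1900) / tau), 3))
```

A sizeable fraction of whatever offset you choose for 1900 is still present decades later, entirely independent of the PDOI.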

Finally, as Bickmore's Figure 9 shows:

the choice of equilibrium temperature Te also has a strong effect on the fit. Changing Te to correspond to temperatures of the last half of the 19th century rather than the most recent 1961-1990 WMO "normal" period makes Spencer's ostensible fit from PDOI (the red curve) look very poor indeed. Why does he need an equilibrium temperature for his system that is already about 0.5 degrees C warmer than pre-industrial temperatures? Choosing that equilibrium temperature means most of 20th century warming is already built into Spencer's model - so using it to claim PDOI explains recent warming is, once again, just embarrassing.

I can understand preparing a scientific paper with faults of this sort - it's certainly easy to fool ourselves when we think we're on to something. But publishing a book on the subject? Doesn't Spencer have some smart colleagues he can run this stuff by first? Of course it really, really doesn't help that the book is also full of attacks on other climate scientists. Bickmore's most damning quote from Spencer's book suggests a certain hubris:

I find it difficult to believe that I am the first researcher to figure out what I describe in this book. Either I am smarter than the rest of the world’s climate scientists–which seems unlikely–or there are other scientists who also have evidence that global warming could be mostly natural, but have been hiding it. That is a serious charge, I know, but it is a conclusion that is difficult for me to avoid. (p. xxvii)

The first thing a true scientist should think of in a situation like this doesn't seem to have even occurred to Spencer. "What if I'm wrong?" He was.

Comments



Marvellous. Can you plot Q(t) for us, for a few different values of τ, so we can see what this "forcing" looks like? With Spencer's fitted τ it should be (proportional to) the middle curve in Bickmore's figure 8 (which has T_e and A_0 both zero), I suppose.


Hmm, I'll have to dig up the source PDOI data for that. Barry sent me his Matlab stuff but the main thing seems to be a binary file I can't read (I don't have Matlab)... Anybody with a URL pointer to the numbers feel free to point me to it!


PDO data at http://jisao.washington.edu/pdo/ and http://jisao.washington.edu/pdo/PDO.latest

My beef with Spencer's correlation is that the PDO is the first principal component of a sizeable chunk of the HadCRUT3 data. If this were a domain most folks felt they had some knowledge of, like money management rather than climate data, the idiocy of the analysis would be apparent. Saying the relationship between PDO and HadCRUT3 explains most of the variation, leaving nothing for other processes, is like saying the 0.9 correlation between the DJIA and the S&P 500 means that no other process can be responsible for more than 10% of the stock market variation. It's dumb because they are measuring the same thing. Or medically: sublingual temperature explains 95% of rectal temperature, so other medical theories for fevers are worthless.

The big hole in this solution is that if you are trying to use this dumb model to predict the S&P 500 (or HadCRUT3), you first need a model that predicts the DJIA (or the PDOI).

Thanks, see follow up post


> I can understand preparing a scientific paper with faults
> of this sort ...

Well, I cannot. You show above that the failure can be seen with calculus learned in the first year of university, and is caught by simple sanity checks. If a working physicist finds himself submitting work containing such mistakes, he really ought to spend a long time in front of the mirror looking for what's wrong...


Isaac Held just wrote a blog post on the same model, comparing it to GFDL's climate model.
http://www.gfdl.noaa.gov/blog/isaac-held/2011/03/05/2-linearity-of-the-f...


Has Spencer responded to these allegations yet?

What if he acknowledges his faults?
Will he retract his submitted paper, or get it published by E&E?
What will he do with his book?

Questions, questions, so many questions....


You say you plugged in the GISS Model E forcings from TOA radiative effects, but surely Spencer's claim is that there are additional forcings resulting from chaotic internal variability - for example, cloudiness variations. Do the GISS forcings include these?

I thought Spencer's novel claims were about the forcing/feedback distinction, and whether it was appropriate to add upwelling SW and LW at the TOA to derive the TOA forcing when they have different effects at the surface, and he only plugged the data into a noddy model to get a first order estimate of what it meant. He first showed how the conventional feedback diagnosis method applied to the idealised situation with a step-function forcing gives the right answer (saying much the same as you just did), then showed how quasi-random fluctuations in forcing messed this up. Is there much progress to be made in spending a lot of time criticising the noddy estimate, and ignoring what he claims to be the main point?


NiV,

If you read Part 1 of my review that Arthur linked, you will see that Spencer had to put in unrealistic parameter values to get the feedback behavior he wanted.


Granted. But then so far as I can see, so does everyone. Including those fitting CO2 models to the 20th century rise. It's what I would expect, given that I don't believe there is any one single parameter that moves the entire climate along a one-dimensional track, with everything else ignored or insignificant.

I enjoyed your previous review. Regarding water depth, I would say it was also incorrect to use the actual thickness of the mixed layer. If you look at data for (coastal) surface water temperature, you will see that it varies up to 10 C over the 6 months between summer and winter. The top-most surface layers react quite fast, and are not acting with significant inertia when it comes to long term changes. The layer responding to changes on timescales between a year and a decade (say) is thinner. If heat conduction into the oceans was according to the normal 1D heat diffusion equation, then the depth penetrated by a change is proportional to the square root of the characteristic time. What the formula is for the turbulent, convective, biologically active oceans I have no idea. However, I suspect that it is smaller than the full 700 m.
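As a rough illustration of that square-root scaling (a sketch using the molecular thermal diffusivity of water; real ocean mixing is turbulent, so the effective diffusivity, and hence the depth, would be much larger than this):

```python
import math

kappa = 1.4e-7               # m^2/s, molecular thermal diffusivity of water
for years in (1, 10, 100):
    depth = math.sqrt(kappa * years * 3.16e7)   # penetration depth ~ sqrt(kappa * t)
    print(years, "yr ->", round(depth, 1), "m")
```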

The appropriate value to use depends on whether you're interested in converting heat to temperature, or in the time lags that the inertia induces. Clearly, for converting heat to temperature, the change in average temperature of the near-surface is relevant. Fundamentally I think it comes down to people using temperature anomalies and treating them as if they were the temperature. In some sort of annually smoothed sense, it's arguable, but the physics is all wrong. To be able to model the physics at all realistically, you have to include the summer-winter variation in forcing, use real rather than anomaly temperatures, and then have a proper time-dependent Cp. Only then would you be able to approximately assess the appropriate depth to use for the time lag on a 'physics' basis.

As I indicated above, I don't regard this sort of curve-fitting activity as useful for anything other than countering other people's curve-fitting. There are too many unknowns.


NIV - people do far more than "curve fitting" to justify the influence of CO2 on temperatures; the fact that something like "Lumpy" (which is Lucia Liljegren's model, not mine) does pretty well with realistic parameters for transient sensitivity and heat capacity is just one small line of evidence. Matching the 20th century temperature rise is one of the least strong constraints on sensitivity if you look, for example, at the Knutti-Hegerl review. This is for two reasons: (1) the forcings aren't well known (aerosols in particular) for much of the time period, and (2) the temperature change so far is relatively small, compared to for example the ice ages or earlier paleoclimate examples.

More importantly, the central problem with Spencer's argument for the PDO (or any other oscillation) driving warming is that he is trying to fit a rising curve with something that oscillates up and down. The only part of the 20th century curve you can really match that way is the part that goes sort of up and down - the middle 1930 to 1980 bit. He can't match the recent rise at all well, nor can he match the early 20th century rise to 1930 - unless he puts in a term completely unrelated to the oscillation he's supposedly basing the argument on (the A0 piece).

In contrast, when you model with forcings that really are going up (CO2 and other long-lived GHG's, moderated by the others in the GISS list) you don't need any extra fake parameters to make it work. That's the point of these two posts of mine, perhaps I should have been a bit more explicit about that.

A simple parameter count tells you the difference - Lumpy has 3, one of which is the baseline temperature for zero forcing, which should correspond to temperatures in the late 1800's, and it does. Spencer's model has 5, one of which is a bogus scale factor (as discussed in this post) so the real flexibility is 4 parameters, again one of which is the baseline temperature for zero forcing, but which Spencer sets not at the late 1800's temperature but at the 1961-1990 average.


Using a model to generate a curve that 'fits' observation is a case of 'affirming the consequent'. It's only of use if you can eliminate all other models and parameter choices. It is especially problematic if the model and parameters are derived in any way from the observations - the logic becomes circular in that case. This can happen in subtle ways - such as the 'bottom-drawer' problem in which studies and models that don't fit observations don't get published, so even if the studies that did get published were genuinely generated ab initio from the physics, there is a selection effect that has already ensured consistency. There is a problem here with aerosol forcing estimates, which it is suspected have been derived by seeing what forcing is required to get particular models to match the observations. The problems with using models to validate the theories on which the models were based have been known for a long time.

Fitting a rising curve to something that oscillates up and down is easy enough; all you have to do is fit the data to its integral. For that matter, I can fit a rising curve to something that is perfectly flat: stochastic trends in ARIMA-like processes being an obvious possibility here. An ARIMA process plus several oscillatory processes (there's more than one climate oscillation) plus several rising and falling trends plus some lumpy things that are poorly measured and mostly unknown, pushed through a very likely non-linear function - it's a big space to explore. The big problem with blind curve-fitting is the unimaginable range of possible functions and models to fit to. A modeller picks some space of functions, and finds the best fit to the data within that space, and thinks something has been proved thereby. But it is a form of the fallacy of argument from ignorance: I cannot find any better alternative model, therefore there is no better alternative model. Nor does simplicity help distinguish theories if you already know on other grounds that the physics is not simple. You need to have near complete understanding of a physical system before you start modelling - at least enough to know what space to explore - if you plan to use the model to do more than explore the implications and consequences of your hypothesis.

Making the observation that Spencer's model is not a perfect fit, is not a complete model of the climate, and doesn't prove anything anyway is a perfectly valid thing to do. Ideally, you need to generalise the point and the principle, so it can be applied to all examples of model-fitting to be able to tell whether they're providing anything useful. But I wouldn't read too much into what this means for Spencer's thesis given that I strongly suspect that he doesn't intend it as more than an explanatory example; a toy model for exploring ideas, building intuition, etc.

I understand the book in question is aimed at the general public, and scientists have been dumbing-down for public consumption for a long time (not that I approve) - but would you say that starting with a fully realistic model in a book aimed at the general public was appropriate? Is this not the equivalent of the "the greenhouse effect works like a greenhouse, letting shortwave visible light in and trapping longwave radiation"-classic we see from the likes of Al Gore and NASA?

Have you tried asking him?


NIV - you're not making sense. The integral of a sinusoid is another sinusoid, not a rising curve. Physics bounds temperature (temperature = energy/heat capacity), so random walks of ARIMA type ("internal variation", i.e. variation independent of external causes that can change energy flows, like GHG forcing changes) cannot be physically valid, because temperature must remain bounded. You can try modeling with such a statistical approach for a while, but eventually it has to break down when the statistics diverge from the physics.

In contrast, physical explanatory models are universally valid. The same basic thermodynamics, radiative-convective modeling and dependency on solar input, atmospheric composition and surface albedo describes the ice ages and Earth's surface temperatures across hundreds of millions of years. The same physics explains surface temperatures on Venus, Mars, Titan, and any other planet or moon where we have sufficient observations to compare with the theory.

If a book is intending to prove its case, it had better get the essence of what it is trying to do right. This is in no way equivalent to an issue of simple analogies, such as the greenhouse one (which captures some but far from all of the relevant physics of the atmospheric case), as in that case it is universally presented as representing something which is a true scientific consensus for which there is a huge detailed literature that describes the problem and how it is solved. If you don't believe Al Gore, go read IPCC AR4 WG1; if you don't believe that, read from among the thousands of peer-reviewed papers and previous reports that it cites. The details are all there in excruciating detail.

For Spencer's book, the only details we have are what's in his book and what he posted on his website. It disagrees with the scientific consensus on the subject, and there is no background source that explains it in any further detail. I haven't asked Spencer anything on the subject - but I think it would be a good idea for somebody to do so.


It depends what you mean by a sinusoid. I interpreted "oscillates up and down" as potentially of the form a+b sin(ct+d). Is that a sinusoid? Does it oscillate up and down? What is the physical meaning of the zero of the scale, here?

In discussing ARIMA, I was actually thinking of the special subset of ARMA processes - I tend to think of AR, MA, ARI, IMA, ARMA, and ARIMA all in terms of the general case. (Apologies!) But just because the model eventually breaks down doesn't mean it isn't a useful model within limits. Physicists happily use the random walk to model Brownian motion, even though a finite planetary atmosphere means the model must eventually break down. I gather the evidence is that one of the characteristic roots of the global temperature time series is so close to the unit circle as to be indistinguishable with a century of data, so on this timescale or shorter, modelling it as an ARIMA process is arguably justified.

However, my point was that even with an ARMA process, it's very easy to get stochastic trends without any increasing input. Do you agree?

Physical explanatory models are universally valid if they are done exactly. Physical models that are approximated are universally invalid. As Box once said, all models are wrong, but some are useful. Given that climate is more complicated than the Navier Stokes equation, and given that we're not even sure if Navier-Stokes *has* general solutions, let alone be able to find them, I'm not convinced by appeals to the laws of physics. Modelling the atmosphere with boxes 100s of km on a side, you cannot *seriously* be appealing to the universality of physical law to support your number-crunching.

If a global campaign to turn around the industrial economy of all nations intends to prove its case, it had better get the essence of what it is trying to do right. The greenhouse effect doesn't work like a greenhouse. A greenhouse doesn't work as they say, and neither does the greenhouse effect. (Nor is it an explanation for the dangerous/catastrophic predictions bit.) And it certainly hasn't always been presented as a "simple analogy", it has usually been presented as *how it works*, often with the subtext that anyone who purports to deny such a simple and obvious mechanism must be ignorant of basic physics. Finding an explanation of how it really *does* work in popular presentation is remarkably difficult.

While I haven't read every line of AR4, I have looked at a lot of it, and I know that the evidence and explanation is not there. It may indeed be somewhere in the thousands of peer-reviewed (and non-peer-reviewed) papers it cites, but I don't know where, and trying to bury enquirers in paper is no way to convince them that your argument is right. A lot of people tell me that there are mountains of evidence, the details are all there, but usually if I keep asking questions, it turns out that they haven't actually seen/read them themselves. They've been told that they're there, pointed to the start of the maze of interlinking, mutually-contradictory papers, and have decided it's too much trouble and have taken somebody's word for it. And none that I've asked have been able to give a concise, complete, and entirely valid chain of evidence and argument as to what the actual evidence being relied upon is. It keeps on shifting; and if you ever get some sort of part answer, and you point out the many flaws in it, the response is always that it "doesn't matter" because there's plenty of other evidence. "Piles" of it.

I don't know. Maybe it *really is* too complicated to give a concise explanation to outsiders, even other scientists like me. Although if so, I'm mystified as to how so many scientific organisations are able to declare their support. Maybe everybody is too *busy* to extract such a summary, filtering out the known bad results and mapping the maze of papers? I used to think that was the IPCC's role, but if it ever was, it failed. It looks to me like Cargo Cult science - but I live in hope of meeting someone able to show me that it's not.

However, all that's off topic. I don't know what Spencer's claims for the model are, but I've seen a lot of scientists use a lot of simplified and unrealistic models to help explain some bigger point, and nobody assumes they mean them literally. I don't understand why Spencer fails to get the benefit of the same doubt.


Mutually contradictory papers? On the basic physics of the Greenhouse Effect? You're making things up now. Name two papers cited by the IPCC on the basics that are contradictory!

I have made numerous attempts here on this website to explain the Greenhouse Effect in simpler but accurate terms. I have linked to other explanations. You may find them useful or not, but it has been attempted. It is inherently complex because radiative transfer has no simple intuitive analog that is any better than the typical blanket/greenhouse/clothing simplification. But it has nothing to do with solving the Navier Stokes equations - those are completely irrelevant to the large-scale energy balance issue, they determine only local motion of the atmosphere which is constrained by real bounds of gravity and energy to be pretty much what it is, to within a few percent most of the time (how often does sea level pressure deviate by more than 10% from 101 kPa?)

For a scientist with some background in physical reasoning, an excellent introduction to the basic physics of the Schwarzschild equation behind the Greenhouse Effect is Ray Pierrehumbert's recent Physics Today article here: http://ptonline.aip.org/journals/doc/PHTOAD-ft/vol_64/iss_1/33_1.shtml - it's short, accurate, and to the point on how the physics works.

My various explanatory posts on the subject are:


Regarding mutually contradictory papers, that's not even controversial. As science progresses, new results frequently correct old ones. The problem with referring people to the literature is that having been pointed to a paper, you can't easily know whether it has been subsequently corrected or disputed. Citation indexes help, but are not a panacea. That's why they get experts who "know the literature" to write the reports, and why it takes time to get to know when people are citing papers that have subsequently been put into question - especially if those writing the reports chose not to mention it.

To take an extreme example, the IPCC begins its explanation of the greenhouse effect thus: "Edme Mariotte noted in 1681 that although the Sun’s light and heat easily pass through glass and other transparent materials, heat from other sources (chaleur de feu) does not. The ability to generate an artificial warming of the Earth’s surface was demonstrated in simple greenhouse experiments such as Horace Benedict de Saussure’s experiments in the 1760s using a ‘heliothermometer’ (panes of glass covering a thermometer in a darkened box) to provide an early analogy to the greenhouse effect." (Read on to see how long it takes them to mention that greenhouses don't actually work that way.) Is this mechanism not contradicted by later papers?

I've made attempts to explain the greenhouse effect myself. Yours are pretty good - although I might argue with some of the fine details and emphasis, they're a lot better than most you see, and would be a good basis on which to start a discussion. There is a lot of scope to explain the effect with better analogies than a blanket. But the wrong explanations still abound, and few people on the pro-AGW side seem to see any urgent need to do anything about it. I was also talking about popular presentations, and - as fine as your blog is - it is still comparatively hard to find. And the Pierrehumbert article you link is behind a paywall, which is another pervasive problem.

Actually, the Navier-Stokes equations are *not* irrelevant to the large scale balance - they're just ignored in the simple 1D models that are the starting point. Their effects are already implicit in your description, because you include the effects of convection. But more importantly, the atmosphere is not 1D - there are horizontal transfers of heat due to large scale convection, there are effects on clouds, precipitation, and wind velocity, and both the heat input and the radiated output differ from place to place, land to sea, deserts to forests, summer to winter. The climate is non-linear - the physics of the average is not the average of the physics. The adiabatic lapse rate is not the same everywhere, and the critical condition is not always met. But I was talking about Navier-Stokes in terms of understanding the stochastic behaviour of the global temperature anomaly, which consists of more than just the basic greenhouse effect.

These simple 1D models are excellent for explaining the basic physics - toy models to explore concepts and build intuition - but they are not realistic enough to actually predict the detailed behaviour of the climate. You would not use such a model to determine if the stochastic behaviour of large scale turbulent circulation could cause autocorrelated variations of cloudiness on decadal time-scales.

Given what you've just done to Spencer's model, perhaps I ought to take the time to go through your greenhouse explanations and point out all the things that are unrealistic?


You are misunderstanding several things there.

If you believe the IPCC discussion of the greenhouse effect cites things that are mutually contradictory (you seem to have picked section 1.4.1 to start from) please cite specific text you find contradictory! The discussion of Mariotte is a question on basic energy balance, and is perfectly correct. The text immediately goes to Fourier's realization that "the temperature [of the Earth] can be augmented by the interposition of the atmosphere, because heat in the state of light finds less resistance in penetrating the air, than in repassing into the air when converted into non-luminous heat’" which captures the essentials of the problem. There is no contradiction, merely expansion of understanding. None of the new results cited there overturned anything that came earlier.

Now you are correct that the full peer-reviewed literature does contain many wrong turns and things that are later found to be false. Conveniently, in the history presented in that IPCC section at least, those wrong turns are left out so you wouldn't be led astray. You can find more on those at Spencer Weart's history website - for example Angstrom's wrong claims about saturation, later proved false.

I'm surprised for somebody with a science background that you don't have access to Physics Today. A $20/year AGU membership will get you in, if you don't have it through an academic library. Fortunately Pierrehumbert's article is also available free on the web here:

http://climateclash.com/2011/01/15/g6-infrared-radiation-and-planetary-t...

As to Navier Stokes - yes, you need complex models to theoretically understand the full current state of Earth's climate and how it will change under warming. It's not a simple problem. But to understand the basic greenhouse effect and the concept of radiative forcing, what's driving the changes as atmospheric constituents change, you don't need any of that. Take Earth's atmosphere and climate system generally as it is, with all the distribution of lapse rates, clouds, variation in water vapor, etc. as best we can measure. Then do the radiative transfer calculations, and that tells you what's going on with the energy in the system. No need for Navier Stokes, it's very basic physics.


The discussion of Mariotte appears to the uninitiated eye to be the start of a very conventional explanation of the greenhouse effect, it doesn't mention energy balance. It appears to be saying that heat gets in and cannot radiate out, and that a box with a pane of glass over it is an analogy to the greenhouse effect. Except that we know that a pane of IR-transparent material produces almost exactly the same effect. It causes the effect largely by blocking the physical flow of air, and it is of course obvious that CO2 in the air cannot do that. It therefore isn't an analogy to the atmospheric greenhouse effect.

Nor is the greenhouse effect a straightforward matter of "resistance" to the radiative flow of heat, because anyone with an everyday experience of heaters knows that the heater heats air which immediately rises and circulates. It's not going to just sit there in front of the heater getting hotter and hotter because it can't radiate. Convection "short circuits" the "resistance". The temperature is controlled by the limits to convection. That much is reasonably obvious to the layman.

Now you and I know how all this fits together. We know about the lapse rate that you get with a high enough air column under gravity. We know that the surface temperature is the effective radiative temperature plus the lapse rate times the average altitude of emission to space. We know that climate scientists *do* include convective balance in their models, since Manabe and Moller revolutionised things in the 1960s, and that the predictions of the toy radiative model come out badly wrong if you don't include it. We know that in the stratosphere where the lapse rate is negative, the greenhouse effect leads to cooling. We know that different places have different lapse rates - depending on humidity and other factors - and we know that the increased humidity predicted for a warming world results in a shallower lapse rate and hence greater warming at altitude in the tropics - the "tropical hot spot". None of this is especially complicated - certainly not compared to other bits of maths or science. But a layman reading that passage in the IPCC would be left confused and clueless. Presumably, since you left it out, convection doesn't matter compared to radiation (so they will think), but they are given no understanding as to why or how. They have no defence against those wily sceptics who bring the subject up. They are simply taking your word for it.

Nowhere does it tell you that the surface temperature is the effective radiative temperature plus the lapse rate times the average altitude of emission to space. It's a simple equation that tells you what you need to know (or find out). Instead, we've got this vague theory of "resistance" and "trapped radiation" and "backradiation" that doesn't actually allow you to calculate anything accurately (and radiative absorption spectra certainly aren't "very basic physics" anyway). Yes, you need to know about it for when the simplifications don't work, but it's not where you should *start*.

You shouldn't assume that I don't have access to 'Physics Today', but you *should* assume that the general public don't. Explanations behind paywalls won't be read. Paywalls pose a serious credibility problem outside academia. The default position is not that all papers are correct unless explicitly proven wrong, it is the reverse. If you want people to believe the theory then you have to explain it, out in the open. If I tell you that I have an excellent rebuttal to your point that you may pay $20 to see, you would surely laugh. (Or I'd be $20 up, whether the rebuttal was valid or not.)

In your last paragraph I think you missed the point I was trying to get at. I agree that as an explanatory device for exploring concepts and building intuition about the greenhouse effect, the basic 1D toy model is excellent. But my point was that this is how I see Spencer's model, too. You spent a considerable time pulling apart details of his simple model, just as I pulled apart some of the issues with 1D models as a way of modelling climate change. The atmosphere is *not* 1D - it is physically unrealistic. But that doesn't matter, because the model isn't the object of the discussion. Pointing out why and where such models are limited is valuable and educational, but I wouldn't read too much significance into it.


Every piece of writing is written by a human attempting to express what they understand to an audience they are envisioning. Even writers with a full and complete understanding of the subject must take into account who they are expecting to read their work. I don't believe the expectation with IPCC reports is that the average reader is someone who is starting from scratch and wants to be able to become a full-fledged climate scientist, learning how to do all the calculations etc.

That's what we have textbooks for, in a developed field like this. Ray Pierrehumbert's recent one is an excellent example, very comprehensive and priced well under $100, which is pretty good for a thick, meaty textbook of that sort. And it gets into the temperature calculation through the whole range of simplified to full-fledged models. Every textbook on climate that I've looked at goes straight to the "lapse rate times average altitude of emission" number - it's pretty fundamental. It's hardly hidden from view. The thing is, the lapse rate piece of that is essentially trivial (it's a stability constraint and, at least for almost all cases, requires no actual knowledge of the detailed motion of the air). The hard part is figuring out what the "average altitude of emission" should be. That's where radiative transfer is central.
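For what it's worth, the number itself is a back-of-envelope calculation with round textbook figures (the particular values below are illustrative, not taken from the passage under discussion):

```python
T_eff      = 255.0   # K, Earth's effective radiating temperature
lapse_rate = 6.5     # K/km, typical tropospheric lapse rate
z_emit     = 5.0     # km, rough mean altitude of emission to space

print(T_eff + lapse_rate * z_emit)   # ~288 K, close to the observed mean surface temperature
```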

You seem to be expecting there to be a universally agreed simple, accurate description of the greenhouse effect in terms that the ordinary person can quickly grasp. But such a thing is rare in science. The AIDS virus kills people - that's simple to express. But how it works? It's a retrovirus, not a regular virus, it attacks the immune system in some subtle way, and fighting it properly requires the detailed knowledge of what it does that's not really available to the ordinary person. We trust medical specialists to understand that sort of thing for us. Rockets work by blasting stuff out the back end, we know that. But a full grasp of the rocket equation and the critical importance of delta-v required vs exhaust speed, thrust vs specific impulse, etc., not to mention the complex stability issues associated with pushing a long thin object from the back - that requires some intensive study, and the relevant intuitions are not simple to acquire. We call people who do that stuff "rocket scientists" for a reason.

Even something as simple as tides - the simple explanations you usually hear are wrong because they suggest there should be a high tide on only one side of the Earth (facing the Moon), and don't explain the tide on the other side. Even Neil deGrasse Tyson, who surely understands this stuff completely, gave a somewhat incorrect simplified explanation of the tides when he recently appeared on the Colbert Report (discussing Bill O'Reilly's tidal proof of God). You have to understand that for tides the relevant thing is the gradient of the gravitational force, not the total force - that's not something that's intuitively obvious, and we have no simple analogy of force gradients in everyday life that I can think of.

Science is hard. Climate science is hard. Blaming the scientists for this is a bit counterproductive.

As to Roy Spencer - when you propose something that appears to contradict decades of scientific work, the burden of proof is high. You have to work hard, like all the scientists who came before you. Roy seems to have skipped the hard analysis of his own work that's required. I provided some of that here (and Barry Bickmore much more - I suggest you read his commentary on the book since he's the one who actually read it).

"You seem to be expecting

"You seem to be expecting there to be a universally agreed simple, accurate description of the greenhouse effect in terms that the ordinary person can quickly grasp."

I'm expecting descriptions to be accurate, yes. And as both you and I have been able to compose far more accurate descriptions in simple, everyday language, it is a bit of a mystery why the professionals so often fail to manage it too.

However, I have no objection when something is genuinely too complicated to explain if they give an oversimplified 'wrong' version of the story - so long as it is made clear that the real story is actually more complicated. But I do get tired when I try to explain to people that rocket science consists of more than turning it so the pointy end is 'up' and lighting the blue touchpaper; when I get insulted for "denying" it, and having no understanding of basic physics.

I don't blame climate scientists for climate science being hard, but I do blame them for telling people it's simple and certain, and I do blame them for making it harder than it needs to be, by ...

[... Moderated - parroting of denier speculation on scientific conspiracy removed ...]


That's enough NIV. You complained that I wasted so much time on Spencer's model; I've wasted even more responding to you. Sorry.


apsmith:

Good move.

One tactic of deniers seems to be to eat up the opposition's cycles. Either the scientist keeps responding and wasting time or the denier is given the appearance of having "won" the argument.

The tip off here that the denier is not just a reasonable person with some scientific training and a desire to play devil's advocate is the endless and increasingly absurd nature of the questioning and the disparity between their willingness to dig deeper and deeper into increasingly trivial points while mysteriously missing completely the overall context that render those trivialities trivial.


> while mysteriously missing completely the overall context that render those trivialities trivial.

I view it as a fundamental inability to grasp the concepts of relative importance and of emergent properties. Like people with eidetic memories not being able to forget the unimportant stuff.
(if this is indeed what happens)

Might be interesting to administer an abstract reasoning test....

(but - unlike some - I think I could be wrong)