Blinding them with math

There exists a widely quoted story about [18th century philosopher/mathematicians] [Denis] Diderot and [Leonhard] Euler according to which Euler, in a public debate in St. Petersburg, succeeded in embarrassing the freethinking Diderot by claiming to possess an algebraic demonstration of the existence of God: "Sir, (a+b^n)/n = x; hence God exists, answer please!"

This story turns out to be (at least in detail) false, but it was likely invented, and resonated, because it embodies an underlying truth almost anyone in the sciences has seen: once a mathematical equation comes out, it tends to blind the naive, and even the experienced will often skip over the equations on a first reading of any complex argument. A minor error in a mathematical expression (a forgotten minus sign being the most common example!) can completely change its meaning, and reasoning about such things requires detailed understanding. That is intellectually demanding work, requiring time and mental effort. Sometimes we are willing to put that time in, but more often than not we don't have the time, or the requisite background, and simply skip over the math, hoping that it makes sense to somebody else.

Of course, there is an equation that's proof of amazingly beautiful self-consistency in mathematics, that some have taken as evidence for God:

e^(iπ) + 1 = 0

but the beauty of that expression isn't something I intend to get into right now.

What brought to my mind the apocryphal story about Euler and Diderot was a pair of recent posts by Dr. Judith Curry, who I've criticized here before. The first post seemed in some ways to finally be a response to my earlier queries about the no-feedbacks question - about which more below. But in the second she oddly chose to highlight 3 comments which claimed the whole thing was ill-defined, one of them chock-full of equations that seem to have blinded her and others to the fact that it made no more sense than Euler's apocryphal equation, ending with a claim that it's all nonsense:

... it is impossible to evaluate these 2 integrals because they necessitate the knowledge of the surface temperature field which is precisely the unknown we want to identify.
The parameter dTa/dFa is a nonsense

which is the sort of language that should remind my few regular readers of our friends Gerlich and Tscheuschner...

The fact is, the no-feedbacks equilibrium sensitivity (Planck response) is perfectly rigorously defined as a theoretical construct, as I quoted from the relevant paper in my previous post on this:

There is a standard definition of the response "without feedbacks", referenced by the IPCC which is provided by Bony et al (Journal of Climate 19:3445 (2006)), appendix A. It is a physical quantity defined by the atmospheric temperature profile and radiative properties:

The most fundamental feedback in the climate system is the temperature dependence of LW emission through the Stefan-Boltzmann law of blackbody emission (Planck response). For this reason, the surface temperature response of the climate system is often compared to the response that would be obtained [...] if the temperature was the only variable to respond to the radiative forcing, and if the temperature change was horizontally and vertically uniform [...]

I.e. the response without feedbacks is quite strictly defined as the response under which "temperature [is] the only variable" that changes, and it changes by the same amount throughout the surface and troposphere (changes in temperature gradient, by contrast, are referred to as the lapse rate feedback).
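As a back-of-envelope illustration of where the number comes from (this is not the Bony et al. calculation, which integrates over the actual atmospheric temperature profile and gives a somewhat larger value of about 1.1-1.2 K), the Planck response can be estimated from the Stefan-Boltzmann law alone:

```python
# Rough estimate of the no-feedbacks (Planck) response from the
# Stefan-Boltzmann law.  Illustrative only: the profile-weighted
# calculation of Bony et al. (2006) gives ~1.1-1.2 K for doubled CO2.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0         # Earth's effective radiating temperature, K
DF_2XCO2 = 3.7        # radiative forcing from doubled CO2, W m^-2

# Linearize outgoing radiation sigma*T^4 around T_EFF:
# d(OLR)/dT = 4*sigma*T^3, the "Planck parameter"
planck_lambda = 4 * SIGMA * T_EFF**3      # ~3.8 W m^-2 K^-1

dT_planck = DF_2XCO2 / planck_lambda      # ~1 K
print(f"Planck parameter: {planck_lambda:.2f} W/m^2/K")
print(f"No-feedbacks warming for 2xCO2: {dT_planck:.2f} K")
```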

This is in some ways an arbitrary, theoretical, practically meaningless definition. In reality, many things besides temperature will change when we change atmospheric constituents or the sunlight our planet receives, and the temperature change will not be uniform either vertically or horizontally. Most of these changes (increased water vapor, melting ice, higher rates of latent heat transport) are associated with a causal chain involving heating and surface temperature increase, so it makes some sense to talk of them as "feedbacks" on the bare temperature increase. On the other hand, the above definition refers to equilibrium responses and is calculated based on an assumption that the planet has returned to radiative balance under the new forcing. This may take a very long time - hundreds of years, given the large heat capacity of the oceans.

So the theoretical no-feedbacks sensitivity definition has the rather odd property that it does not happen "first", and the feedbacks happen later. Rather the no-feedbacks response and the full-feedback response components are happening simultaneously, and both reach their full equilibrium state only after hundreds of years or longer. As briefly discussed in this earlier post, there is a real time-delay in Earth's actual response with both fast and slow response components (and probably a continuum of time scales in reality) making the time-dependence somewhat complex.
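That mix of fast and slow components approaching the same equilibrium can be sketched with a toy two-timescale relaxation; the time constants, the 50/50 split, and the 3 K equilibrium value below are all illustrative numbers, not fitted to anything:

```python
import math

def warming(t_years, dT_eq=3.0, tau_fast=4.0, tau_slow=400.0, frac_fast=0.5):
    """Toy two-timescale response to a step forcing: a fast (mixed-layer)
    component and a slow (deep-ocean) component both relax toward the
    same equilibrium.  All parameter values are illustrative."""
    fast = frac_fast * (1 - math.exp(-t_years / tau_fast))
    slow = (1 - frac_fast) * (1 - math.exp(-t_years / tau_slow))
    return dT_eq * (fast + slow)

# After a decade, only about half the eventual warming has appeared;
# the remainder takes centuries to emerge.
for t in (10, 100, 1000):
    print(f"t = {t:5d} yr: warming = {warming(t):.2f} K")
```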

That makes the no-feedbacks number a rather non-intuitive quantity, and perhaps is responsible for much of the confusion that Curry's posts seem to have engendered. I'm going to try to be as charitable as I can here in trying to understand what she and others commenting on the matter were trying to get at.

First, it's clear that Dr. Curry hasn't spent a lot of time on the problem lately - as her original post noted, she allotted herself 1 day to become familiar with both the calculation of the value of radiative forcing from doubled CO2, and the "no-feedbacks" response. She seems to have done a nice job of summarizing the definition of radiative forcing and how it is determined. The problems start with her claims about the surface-temperature response. Not even getting into the no-feedbacks question, she starts with the following odd gripe about IPCC's very plain definition of linearized sensitivity and response:

According to this simple model that relates radiative forcing at the tropopause to a surface temperature change, there is an equilibrium relationship between these two variables. The physical relationship between these two variables requires many many assumptions, including zero heat capacity of the surface and a convective link between the surface and the tropopause.

But this claim that it "requires many many assumptions" is simply false. She provides no reference for the claim, and in fact the only assumptions the IPCC's linearized equilibrium response formulation requires are that (A) the radiative forcing is relatively small and (B) Earth's climate is not at any special "tipping point" where small forcings can cause large (nonlinear) responses (see Bony et al (Journal of Climate 19:3445 (2006))). Under those conditions, *every* response to forcing (global average temperature, regional average temperatures, average water vapor concentrations, etc. etc.) will be a linear multiple of that forcing. This is standard perturbation theory: essentially, taking a Taylor expansion of all the response functions (which assumption (B) guarantees exist and have finite first derivatives) and retaining only the linear term (justified by assumption (A) that the forcing is small).
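To make the perturbation-theory point concrete, here is a toy zero-dimensional energy balance (absorbed sunlight plus forcing balanced against sigma*T^4, with no feedbacks at all), comparing the exact equilibrium response against the first-order Taylor estimate; the specific numbers are illustrative:

```python
# Why linearization works for small forcings: a zero-dimensional
# energy balance (1-albedo)*S/4 + F = sigma*T^4, solved exactly
# and via a first-order Taylor expansion.  Illustrative numbers.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 0.7 * 1361 / 4    # absorbed solar flux, W m^-2 (~238)
F = 3.7               # forcing, W m^-2

T0 = (Q / SIGMA) ** 0.25                  # baseline equilibrium T (~255 K)
dT_exact = ((Q + F) / SIGMA) ** 0.25 - T0 # exact response
dT_linear = F / (4 * SIGMA * T0**3)       # first-order Taylor term

print(f"exact:  {dT_exact:.3f} K")
print(f"linear: {dT_linear:.3f} K")       # the two agree to ~1%
```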

But here's the real question - why did the assumption of "zero heat capacity of the surface" pop up here? Heat capacity has absolutely no bearing on equilibrium response at all - the "equilibrium" state is the one after all the heating has been done and there is no continued growth of energy storage in the system. Where heat capacity matters is not in the equilibrium response, but in the time it takes to reach equilibrium, and in the magnitude of the "fast" response to a radiative imbalance.

So perhaps the issue here is that Dr. Curry's intuition is oriented not toward the equilibrium response the IPCC definitions use, but toward the short-term or immediate responses of the climate system to radiative changes. In that case she is not accepting (or understanding) the IPCC definition as stated above, but instead looking at some sort of short-term change problem. This reading is supported by what appears to be an alternative definition of "no-feedbacks sensitivity" using surface fluxes, proposed toward the end of her post:

1. Compute the surface radiative forcing and its amplification by the atmospheric warming in a manner following Myhre and Stordal 1997, using gridded global fields of the input variables obtained from observations (e.g. the ECMWF reanalysis, ISCCP clouds, satellite ozone, some sort of aerosol optical depth from satellite). Conduct the calculations daily over two different annual cycles (say 1 El Nino and 1 La Nina year). These two different years provide an estimate of the uncertainty in the sensitivity associated with the base state of the atmosphere. Note, each annual forcing dataset will need to be run repetitively for maybe up to a decade to get equilibrium for the ocean and sea ice models. A grid resolution of 2.5 degrees should be fine.

2. Use the calculated fluxes to force the surface component of a climate model (without the atmosphere), including the ocean, sea ice, and land subsystem models, for the baseline (preindustrial) and the doubled CO2 forcing. Conduct two calculations for both the baseline and perturbed cases:

1. keep the (turbulent) sensible and latent fluxes for the perturbed case the same as for the baseline case
2. determine the perturbed surface temperatures by calculating the turbulent sensible and latent heat fluxes using the perturbed surface temperatures

Note, these two different ways of treating the sensible and latent heat fluxes tell you different things about sensitivity (without allowing the evaporative flux in #2 to change the radiative flux).

To the extent I can make heads or tails of this, I'll note a few things that distinguish this in rather substantive ways from the IPCC no-feedbacks definition:

* IPCC no-feedbacks response is a simple, purely radiative calculation (for a given composition/temperature model of the atmosphere), with no need to calculate the rather complex ocean changes and convective fluxes Curry proposes here. Its simplicity is a big part of its usefulness; a hard-to-calculate number doesn't provide a very easy-to-use reference point.

* As far as I can tell, what Curry is proposing is a non-equilibrium, short-term surface temperature response. The output of #2 would be a graph of temperature increasing over time, not a single constant temperature (unless there are additional components of this proposal left out of her definition here).

* Again as far as I can tell, Curry's proposal is surface-focused, and does nothing to assess the net energy balance of the planet as a whole. That is probably fine for short-term temperature response, but over the long term the increased outgoing radiation from a warmed atmosphere is what stops the planet from continuing to gain more and more energy and keep increasing in temperature, so that has to be taken into account somehow. I don't see how that happens in Curry's formulation - unless there's something else missing there.

So what Dr. Curry is proposing seems to be a study of short-term immediate surface response to forcings. This could be another valid measure of the sensitivity of our planet to greenhouse gas changes, though somewhat harder to evaluate than the IPCC no-feedbacks response. In reality I think what she's looking for is the full time-dependent response of surface temperature to a radiative forcing, from the short-term initial piece that her proposal might uncover, through to the century-scale equilibrium Planck or full response.

This could be an interesting question to pursue. But it's definitely not what's normally understood as the "no-feedbacks" term. I believe it's useful to have clarity in names and definitions; what Dr. Curry proposes is best called a time-dependent surface response. I hope to post some more thoughts on that particular quantity in future.


Comments

Thank you so much for this.

Thank you so much for this. When I read that one I *knew* it was wrong, but my lack of technical expertise led me down quite a few tortuous paths - all verbose, all unproductive.

This is an elegant, simple explanation. Thank you.

I'm a layperson, so go easy

I'm a layperson, so go easy on me. I've tried to follow the exchanges on this subject on curryja. I noted that Lindzen was asked about this by one of the commenters on curryja, and the response he supposedly gave indicates to me he has no problem with this at all. Roy Spencer also has a statement about this 1 C with no feedbacks on his blog that seems to me to indicate he has no problems with this. At one point in the conversation the commenters who appear to know something seemed to be saying this number is model generated and then never used, whatever that means. But later there was an exchange where an apparently knowledgeable commenter affirmed a description of climate sensitivity that included the 1.2 C with no-feedback number in the math for the estimate of sensitivity.

So, if a scientist says he thinks sensitivity is 3C, is the 1 to 1.2C with no feedbacks in the 3C number? If not, what is it specifically used to do?

I really am amazed by your temperature predictions, and can't wait for the final numbers for 2010. Did you do that with climate models, or a pad and pencil?

JCH - that's a good question

JCH - that's a good question on whether the "1 to 1.2C with no feedbacks [is] in the 3C number", but I'm really not sure how to answer it. I think the best way to understand it is as follows:

Typically in science experiments in the lab, when you're trying to understand some behavior of a physical system, every effort of the experimenter goes into controlling all the controllable variables except the one thing you're interested in. For example, when I was an undergraduate I helped with some experiments on scattering of laser light by a liquid/gas system near its critical point. Using a laser means you have things at a fixed light frequency, and we used a continuous laser with a pretty constant light output level. The geometry of the incoming light and the detector was fixed, so there was no angle that changed. We looked at a gas of a specific molecular composition and in a fixed volume container. The two remaining variables were the density (quantity of gas in the container) and the temperature. Putting a given amount of gas into the container fixed the density, so that left the temperature as our free variable. However, changing the temperature introduces a second variable, the rate of that change. We did a series of runs with the temperature changed at different rates to be sure the signal we were seeing was really representative of the material at that precise temperature.

Climate science is mostly not done in the lab, but in the real outdoor world, where there are many different things changing at the same time. That makes it very tricky to attribute cause and effect, to understand the relative importance of one thing over another. That's where theoretical models of the climate system come in. Whether simple models you can work out with pencil and paper or complex ones requiring computers, these allow you to freeze all but the one variable you're interested in, and see what the impact is.

The "no feedbacks" 1.15 C number is the result of taking such a theoretical model of the climate and freezing everything except the temperature, and allowing that only to change in a uniform manner. It's a simple calculation that does reveal the direct impact of doubling CO2 or equivalent changes in radiative forcing. So it has meaning in that sense, but really only in that sense, that "if you freeze everything but temperature", that's the change you would see. As you note, Lindzen and Spencer agree on this number as well. It's a reference number. Then you run climate models, you look at the real climate now, you look at what we can gather from histories of past climate, and you see that the real number looks an awful lot closer to 3 C than 1 C. So this direct effect of freezing everything but temperature gives you only a small portion of the change that doubling CO2 causes.

It's that discrepancy that gets scientists interested in looking at the causes, the "feedbacks". So climate modelers freeze everything but temperature and water vapor, or lapse rate, or clouds, and see how big that portion of the response is compared to only allowing temperature to change. The no-feedbacks number gives you a base reference for the magnitude of these other effects, which together should explain the magnitude of the whole 3 C number. But the 3C is an integral value obtained from all those different sources. It's not calculated from the 1.15 C plus feedbacks, that's just the way people try to explain how large it is.
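The bookkeeping described above, in which each feedback is compared against the Planck term rather than the 3 C being literally computed as 1.15 C plus corrections, is often summarized as a net restoring strength lambda equal to the Planck parameter minus the sum of the feedback terms. A sketch with illustrative round numbers (not results from any specific model study):

```python
# Sketch of the standard feedback bookkeeping: each feedback weakens
# (or strengthens) the net restoring parameter lambda, and the
# equilibrium warming is dF / lambda.  The feedback values below are
# illustrative round numbers in W m^-2 K^-1, not model results.
DF_2XCO2 = 3.7               # forcing from doubled CO2, W m^-2
lambda_planck = 3.2          # Planck (no-feedbacks) restoring strength

feedbacks = {
    "water vapor": 1.8,      # warming -> more vapor -> more IR trapping
    "lapse rate": -0.8,      # altered temperature gradient (negative)
    "surface albedo": 0.3,   # melting ice -> darker surface
    "clouds": 0.7,           # the most uncertain term
}

lambda_net = lambda_planck - sum(feedbacks.values())
dT_no_feedback = DF_2XCO2 / lambda_planck   # ~1.15 K
dT_full = DF_2XCO2 / lambda_net             # ~3 K
print(f"no feedbacks: {dT_no_feedback:.2f} K, "
      f"with feedbacks: {dT_full:.2f} K")
```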

On the temperature predictions - it was a very rough calculation based on my estimates of the solar cycle and a touch of El Nino variability. I suspect I'll not do so well next year, but maybe. We'll see...

I wonder if you've emailed

I wonder if you've emailed this to Curry (assuming you don't want to get bogged down participating in her blog).

We certainly agree, I

We certainly agree, I basically said the same thing over on the Air Vent referring to your arXiv paper.
Thomas Milanovic gets it very wrong.

1. The temperatures, emissivities and albedo used in these simple calculations are effective radiative ones, as in "effective radiative temperature", Teff, i.e. the temperature that would be calculated for an isotropic, isothermal earth in radiative balance with the solar input. Arthur Smith shows how this is done. TM claims that this cannot be done, but in fact it is definable, and these effective parameters can be and have been measured.

2. Hölder's inequality shows that the temperature calculated from an area average over the surface, Tave, would be less than Teff.

3. In the simplest models of the greenhouse effect Teff (no atm) is calculated for an Earth without an atmosphere, and compared to an observed Tave. The 33K difference is a lower limit to this difference because Tave (no atm) must be lower than Teff (no atm).

4. Smith shows how you can add complications (a rotating earth, a variation of T(x,t) with latitude, etc.) and find Tave(no atm). You can, of course, also add convection, and more complicated models do this, so the question is what you gain in understanding, and at what cost.

The S-B temperature used to describe the Earth with and without the greenhouse effect in the simplest models is calculated from balancing the solar energy flux flowing in and the IR energy being emitted. These are definable quantities put together for use in a simple model, which can be, and have been, measured, mostly from satellites, to a fair degree of accuracy. Oh yeah, Eli thinks the temperature field is continuous, at least as far as fluid dynamics goes.
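Eli's point 2 can be checked numerically: for any non-uniform temperature field, the effective radiative temperature (the fourth root of the mean of T^4) exceeds the simple area mean, by Hölder's (equivalently Jensen's) inequality applied to the convex function T^4. A toy two-patch example with made-up temperatures:

```python
# Toy check of Hoelder's inequality for a "planet" of two equal-area
# patches: the effective radiative temperature Teff (fourth root of
# the mean of T^4) exceeds the area-mean temperature Tave whenever
# the field is non-uniform.  Patch temperatures are illustrative.
temps = [300.0, 200.0]   # patch temperatures, K

t_ave = sum(temps) / len(temps)                           # 250 K
t_eff = (sum(t**4 for t in temps) / len(temps)) ** 0.25   # ~264 K

print(f"Tave = {t_ave:.1f} K, Teff = {t_eff:.1f} K")
assert t_eff > t_ave   # equality would require a uniform field
```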

happy new year!


DeWitt and others made some

DeWitt and others made some valiant attempts at reason over there. However, what disturbs me most is comment #15 from David in London - "[arguing over equations] tells me the only thing I need to know: The science is clearly not settled." - wonderful. Doesn't matter if one guy's so-called equations are a bunch of nonsense of course... So how do we tackle that?

"Thomas Milanovic gets it

"Thomas Milanovic gets it very wrong. ..."

Thomas suggests none of you climate scientists have read this, and that all of you should:

I've read it now. Seems to be

I've read it now. Seems to be a pretty humdrum account of an example of chaotic dynamics, nothing particularly new. The authors have published half a dozen similar articles in other journals. The system investigated is one of coupled oscillators, so clearly not directly applicable to the climate issue, and I can't see any indirect link since oscillators have bounded dynamics, while Earth's climate system does not (at least not below temperatures of about 5000 degrees due to entropic considerations), being in a steady state governed by incoming and outgoing energy flows.

AS - Tomas, not Thomas (my

AS - Tomas, not Thomas (my bad), has posted an article on Climate Etc.:

"...There are then scientists who know what is chaos and really understand it. I’d put that category at 1% and much less for the climate scientists. ..." - Tomas Milanovic

Thought you might be interested.

Yup, I'll probably comment

Yup, I'll probably comment over there I guess...