Physics and its mathematical abstractions

I've been discussing in some detail here a mathematical model of the response of Earth's climate to radiative forcings, trying to address some of the concerns expressed elsewhere on the need for such a model to be "physically realistic". In the case of the two-box model, a given fit of the response function to a two-time-constant decay curve could come from one of many different underlying physical models that correspond to a partitioning of Earth's climate system into two parts with different response rates. So the question has been whether any of these possible underlying physical models are in some way "realistic" or not. That essentially reduces to criteria on the magnitude of the various constants and partial outcomes in the model relative to real components of our planet. So far there seem to be some parameter values that look realistic, although there's a limit on what portions of Earth's climate system the different components can be related to.

I'll return to one more "physical realism" criterion for the two-box model of our climate in a later post, but there's a more fundamental question in all this. Every such model is an approximation to the real system in some form or other. Being an approximation, that means there will always be some properties associated with the model that are indeed different from reality. So what can you trust from the application of such an approximation? What meaningful results can be extracted, and why?

This is a very common issue in physics, where models abound at every level of the subject. The essence of physical theory is the creation of simplified mathematical models that approximate reality to some degree. The astounding thing about our world is how well reality conforms to the best of those mathematical models (Newton's gravity, Einstein's relativity, quantum electrodynamics) with observations agreeing with the theoretical models to parts in a trillion in some examples. Those theories are tremendously well corroborated by such predictions, so that reality and the representation by a mathematical model surely are closely connected. But still, there is no way to prove the model is "real". In fact the evolution of such models (Newton's gravity to Einstein's General Relativity) or conflicts between them (GR and quantum theory) are generally associated with changes or conflicts in perspective about reality itself. For example, relativity requires denying the reality of uniform time, something unquestioned in Newton's world perspective.

The reality of even so fundamental a thing as points in spacetime is questionable - we cannot experience the infinite in our reality, and yet a mathematical point is infinitesimal. The surprising complexity (and outstanding questions, such as the continuum hypothesis) of pure mathematical definitions of the "real" numbers we use in physically modeling our universe also suggests that reality is likely, underlying it all, to be based on something very different that merely looks like the mathematical models we've come up with so far.

This relationship between reality and our mathematical models, idealizations, or "abstractions" was the subject of my old colleague David Mermin's recent column in Physics Today: "What's Bad About This Habit". This month's issue has more than 10 letters commenting on Mermin's column (along with Mermin's response), an unusual display of interest for that magazine. A key point from Mermin's response is the following:

... nature is neither paradoxical nor contradictory. Paradoxes and contradictions arise from our defective understanding of nature. What I advocate is that in trying to improve that understanding we keep in mind, among the possible resolutions of an apparent paradox or contradiction, an unacknowledged and inappropriate reification of an abstraction.

Down in the trenches of modern science, when I was working on electronic structure calculations for materials in the early 1990s, we were quite aware of at least some of our abstractions and their unreality. To simplify calculations enough to get practical results for real materials, a large number of approximations is essential. The silicon atom has 14 electrons, for example, but for low-energy processes the central 10 electrons of the core don't participate. To a very good approximation they just stay in their lowest-energy atomic states even when neighboring atoms are just tenths of a nanometer apart. So for our calculations we would replace the "nucleus + 14 electrons" of the real atom by a "pseudonucleus + 4 electrons", with a "pseudopotential" replacing the original nuclear potential and accounting for the various interactions between core and valence electrons. But that meant that when you looked at the electron density that came out of our calculations, you had to remember that the abstraction was missing the core electrons, so those numbers could not be correct except outside the core region around each nucleus.

More significantly, the process we used for solving these systems used the "Kohn-Sham" density-functional theory, under which the real multi-electron quantum wavefunction is replaced by a product of fictitious single-electron wavefunctions, with a density-dependent potential in addition to the nuclear pseudopotentials. The Kohn-Sham approach is a mathematically rigorous process (in fact, Mermin, along with Hohenberg and Kohn, was one of those responsible for proving that) for getting that final electron density and the associated total energy, as long as the associated potentials are what they should be. The toughest piece is the so-called exchange-correlation potential associated with electron-electron interchange and interaction, but there seem to be good approximations for that. We knew thanks to that piece of mathematical logic that the approximations we were using would give realistic numbers for comparing total energy and density between different arrangements of the atoms, and generally it worked well for that.

But the other output of these calculations was those single-electron wavefunctions, with their characteristic position dependence and energies. We knew these were a fictitious underlying construct, not representing the real multi-electron wavefunction of the system. And yet it was always tempting to look at them in more detail, to derive the resulting single-particle energy bands and then compare them with observed electronic properties of these systems. Far more often than not the observations matched very well - enough that for many of us working on these systems those fictitious wavefunctions began to seem like representations of something real, though it wasn't entirely clear (to me at least) what.

That is the kind of place where physics goes beyond logic and mathematical rigor - pushing abstractions as if they were real to see what they imply and how far you can go. It's a good thing to do - at the least it may allow you to calculate properties of the world in ways that you didn't know about before - and it can lead to many advances in our understanding of nature. But it's critical to remember (and easier to do in this sort of case than in more fundamental areas of physics) that reality and the abstractions are still two different things. The abstractions are a representation of reality, and to the degree they work, nature's behavior conforms to the way we think of those abstractions. But those abstractions are not usually, perhaps never, real. Progress in science depends on retaining at least a smidgen of skepticism about even the most apparently real things we think we have discovered about the world. As Mermin quoted Einstein on his key insight in coming up with relativity - "At last it came to me that time was suspect."

Paul Krugman's recent article on the problems with economic theory is analogous, but the problems are even worse in that case. Economists have had "physics envy" and focused for decades on idealized mathematical representations of economic systems and financial products. To some degree, even for some long stretches of time, these models work, and seem corroborated. But it is essential not to forget that they are still an abstraction, an idealization. And abstracting and idealizing the behavior of people, as all the social sciences including economics must do, is considerably more challenging than for the timeless components of the natural world that physics addresses. The real world sometimes can behave dramatically differently, as has happened with the latest recession.

That doesn't mean that we know nothing. In the Kohn-Sham case, as with Newton's laws, if the application lies in a regime where the abstractions are generally valid, and if the system we're looking at doesn't have something unusual about it that introduces additional complexity, reality does conform to these mathematical models to a good approximation. You can confidently make accurate predictions or explain observations. For those components of a theory backed by mathematical rigor, such as density and total energy in the Kohn-Sham case, discrepancies between theory and observation should be small; where they are large, they are good indicators that one of your approximations or assumptions has broken down. Where that rigorous relationship is lacking, as for single-particle band structures, the results may still often correspond to reality, but when they don't, not much can be concluded from the mismatch.

The logic of climate response
Looking at the issue of Earth's climate, one central scientific question has been how much Earth's average surface temperature will rise as we add greenhouse gases to the atmosphere. The temperature increase after doubling atmospheric CO2 is known as the "sensitivity"; the most recent IPCC report (Working Group 1 of the Fourth Assessment Report, AR4 WG1) concluded that long-term sensitivity is likely between 2 and 4.5 degrees C for a doubling of CO2, with a best estimate of 3 degrees C.

The effect of CO2 and other long-lived greenhouse gases can be quantified to within around 10%; the energy flow through the atmosphere changes so that initially the troposphere (lower atmosphere) retains some number of additional watts in every square meter. This change in flow into the troposphere is considered a "forcing", i.e. a causative agent for the warming and other responses. Doubling CO2 results in a forcing of about 3.7 W/m^2 (the effect is close to logarithmic for CO2, so it doesn't matter much exactly what the initial concentration was before it was doubled).

That number is relatively small compared to the total average input from the Sun (about 341 W/m^2) or the total flow through the troposphere of around 500 W/m^2. The general expectation is that relatively small changes of that sort should produce a response that is close to linear in the forcing, barring special circumstances (the climate sitting at a singular point of some sort). So a sensitivity of 3 degrees C per doubling of CO2 implies a long-term linear response of Earth's average surface temperature of about 0.8 degrees C per W/m^2 of forcing. The 2 to 4.5 degree range of the IPCC implies a response of 0.54 to 1.2 degrees C per W/m^2.
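As a rough numerical check, here is a minimal R sketch of those conversions, using the standard logarithmic approximation for CO2 forcing (the 5.35 ln(C/C0) fit); the function names are just for illustration:

    # Standard simplified expression for CO2 forcing: dF = 5.35 * ln(C/C0) W/m^2
    co2_forcing <- function(C, C0) 5.35 * log(C / C0)

    co2_forcing(560, 280)                # doubling CO2: ~3.7 W/m^2

    # Convert a sensitivity quoted per doubling into degrees C per W/m^2 of forcing
    per_Wm2 <- function(dT_per_doubling) dT_per_doubling / co2_forcing(2, 1)
    per_Wm2(c(2, 3, 4.5))                # ~0.54, 0.81, 1.21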

However, note this is a long-term response. IPCC (Chapter 10 of the AR4 WG1 report, p. 749) also notes a lower expected "transient response" value of 1 to 3 degrees C (about 0.3 to 0.8 C per W/m^2) for doubling of CO2 under a scenario where the CO2 levels are increasing at 1% per year, i.e. if the doubling takes about 70 years. The difference between the transient and long-term response numbers implies that a significant fraction of the warming is determined by forcings that were applied many years, even decades, in the past. So there is a significant delay before the full effect of a given forcing level is felt - "warming in the pipeline" as it is sometimes called.

The transient response at a given time, when the forcing itself is changing, should still be close to a linear function of the forcings, but the dependence is not just on the present forcing value; it involves historical values as well:

T(t) = ∫_{-∞}^{t} F(s) r(t-s) ds

where T(t) is average (climatic) temperature at time t, F(t) is the forcing at time t, and r(δ t) is the response function relating present temperature to the forcing a time interval δ t in the past. Under steady-state conditions where F(t) is a constant, the long-term response number is simply the total integral of the response function:

long-term response = ∫_0^∞ r(t) dt

So the fundamental question of the Earth's long-term response to greenhouse gases or other forcings translates to a purely mathematical question about this response function r for our planet. The more we can learn about r(t), the more definitive the statements we can make about long-term response and sensitivity of the climate.
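To make the convolution concrete, here is a minimal discrete (annual) version in R; the exponential form of r(t) used below is purely illustrative, not a fitted result:

    # Discrete (annual) version of T(t) = ∫ F(s) r(t-s) ds:
    # the temperature in year n is a weighted sum over all past forcings,
    # with weights given by the response function r.
    r <- exp(-(0:99) / 30) / 30                  # illustrative response, 30-year decay
    forcing <- c(rep(0, 50), rep(3.7, 150))      # step forcing of 3.7 W/m^2 from year 51 on

    T_resp <- sapply(seq_along(forcing), function(n) {
      lags <- 0:min(n - 1, length(r) - 1)        # how many years back we can reach
      sum(forcing[n - lags] * r[lags + 1])
    })

    sum(r)            # long-term response per unit forcing: the discrete "integral" of r
    tail(T_resp, 1)   # approaches 3.7 * sum(r) once the forcing has been on long enough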

Given the nature of heat, it seems unlikely that r(t) could ever be negative - that would imply cooling of the surface in response to an input of heat in the troposphere a time t in the past. We do expect r(t) to be quite large for short time periods (1 year or less) because there is certainly a significant short-term response. Should it drop monotonically from that point? A rise in r(t) after some number of years would mean that forcings that number of years ago were more important than more recent forcings in affecting the current temperature. That seems unlikely, but I don't believe it's physically impossible, and I'll show below some evidence for that in the observational data.

Assuming that r(t) has to be positive or zero everywhere already does tell us something. If we only have enough data to get a good approximation for r(t) for t less than some limit t_max, then we effectively only know the integral up to t_max. The full long-term response presumably includes positive values for r(t) beyond that point, and so the long-term response is necessarily either equal to or larger than what the fitted approximation tells us. That is, a model for r(t) known up to t_max provides only a lower bound on the long-term sensitivity.
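A tiny numerical illustration of that lower-bound argument, again with a made-up exponential response rather than anything fitted:

    # If r(t) >= 0 everywhere, truncating the integral at t_max can only
    # underestimate the long-term response, so a fit known out to t_max
    # gives a lower bound.
    r_full <- function(t) exp(-t / 30) / 30
    sum(r_full(0:49))     # "known" part, out to t_max = 50 yr: ~0.82
    sum(r_full(0:999))    # essentially the full integral: ~1.02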

Tamino's original post on this looked at two possible mathematical models for r(t) and fitted them by comparing various different forcing curves to the 20th century temperature record. The first, "regression" model is simply r(t) = A δ(t) - an instantaneous response, with no dependence on forcings at earlier points in time (r(t) = 0 for t > 0). The long-term response here is the same as the instantaneous response, A.

Tamino's second model was the two-box model:

r(t) = a2 exp(-t/τ+)/τ+ + a3 exp(-t/τ-)/τ-

in the notation I've been using in previous posts on this. He specified the two time constants (at 30 years and 1 year) and then fitted the coefficients. He also briefly justified the time constants as referring to "ocean" and "atmosphere" components of Earth's climate system, but clearly didn't look into the detailed properties of boxes that would have been required for that correspondence, as has been the subject of my previous posts on the two-box model.
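In code form, the two-box response and its implied long-term response look something like the sketch below; the time constants follow Tamino's choice, but the coefficient values are placeholders, not his fitted ones:

    # Two-box response: sum of two exponentials with time constants of
    # 30 yr and 1 yr; a2 and a3 set how much weight each box carries.
    r_twobox <- function(t, a2, a3, tau_long = 30, tau_short = 1) {
      a2 * exp(-t / tau_long) / tau_long + a3 * exp(-t / tau_short) / tau_short
    }

    # The long-term response is the full integral of r(t), which for this
    # form is just a2 + a3 (each normalized exponential integrates to 1):
    integrate(r_twobox, 0, Inf, a2 = 0.5, a3 = 0.2)$value   # = 0.7 for these placeholders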

Thinking of all this as simply mathematical modeling of the response function - an abstraction with meaning in itself, without worrying about what the components specifically correspond to - suggests looking directly at the resulting values of r(t), and at the possibility of other approaches to extracting that function from the observational and forcing data.

resp_vs_time1.png
Figure 1
Fitted response functions for the two-box model (two exponentials), with the short time constant set to 1 year and the long time constant varying from 5 years to almost 100 years.

Following the fitting approach of my original post on this, and varying the long time-constant as was done in the second and third figures there, the above graph shows the associated fitted functions r(t). There is some resemblance between the curves, but the changing exponential decay rates force them apart. The regions of densest lines seem most strongly constrained by the data, for t from about 1 to 4 years, and then later from about 10 to 50 years.

I've run the same process with shorter short time constants, and the value for r(t) at 1 year is consistently very close to 0.06 K per W/m^2 per yr. The densest values for t = 2 to 4 years drop with the shorter time constants, so they are not as constrained as that first number; the dense values from 10 to 50 years remain almost unchanged, in the range 0.01 to 0.02 in these units. The total response (the integral under these curves) is given in the third figure of the above post, rising slowly as the long time constant increases.

What about other functions? Since heat moves diffusively, the rate of transport for a fixed temperature difference in a one-dimensional problem declines as the inverse square root of time. This suggests a power law may be more appropriate than an exponential response - or we can combine them. Figure 2 shows fitted values for

r(t) = a exp(-t/τ) + b t^α

with τ fixed at 1 year (this should match the near-immediate response reasonably well) and the power-law exponent α varied from -0.25 (1 over the fourth root) to -1.7.

resp_pwr1.png
Figure 2
Fitted response functions for a two-box model where the second box has power-law behavior, with the first box time-constant set to 1 year and the power law varied from -0.25 to -1.7.

For more negative powers the fitted value of a is negative, and the r(t) curves start to go negative at t = 2 years when α < -1.65. For all the plotted curves the fit to temperature was quite reasonable, with an R^2 value of 0.79 to 0.82 (the best fit was for α = -0.8). Interestingly, here the dense curve sections seem to indicate non-monotonic behavior, with a peak in the response curve from 3 to 5 years. The behavior in the range 10 to 50 years, about 0.01 to 0.02 in these units, seems to be roughly the same as for the two-box model, despite the much slower long-time-scale decay in this power-law case.
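Here is a sketch of that functional form in R; the values of a and b are illustrative placeholders (only α = -0.8, the best-fit exponent noted above, comes from the fits), and the power-law term is evaluated only at t ≥ 1 year since it diverges at t = 0:

    # Exponential-plus-power-law response with tau fixed at 1 year, as above.
    r_pwr <- function(t, a, b, alpha, tau = 1) {
      a * exp(-t / tau) + b * t^alpha
    }

    t <- 1:50   # annual offsets; the power-law piece blows up at t = 0
    plot(t, r_pwr(t, a = 0.04, b = 0.02, alpha = -0.8), type = "l",
         xlab = "years since forcing", ylab = "r(t)  (K per W/m^2 per yr)")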

Rather than trying to find the right functional form for Earth's response curve, we can try to extract what it looks like directly from the temperature and forcing data. For this purpose I've also added the El Nino (Southern Oscillation Index - SOI) as an additional fitting term, since it is the strongest known effect on global average temperatures other than the forcings. R code for the following image is available here (code for all images in this and related posts is available just by changing the 'png' extension to 'R' in the URL), with the SOI index downloaded from UCAR and trimmed to the same 1880-2003 period as the other time series we've been using.

Since the fit is to annual average values, the terms in a direct fit of the response function are simply linear combinations of forcing in the current year and each year preceding, so the temperature in year n would be:

T_n = c + b SOI_n + a_1 F_n + a_2 F_{n-1} + a_3 F_{n-2} + ...

and the values a_i are an approximation to the ideal response function r for times between i-1 and i years.
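A minimal sketch of how such a lagged regression can be set up in R follows; this is not the actual code linked above, and it assumes hypothetical annual vectors temp, forcing, and soi covering the same years:

    # Build a matrix whose k-th column is the forcing k years earlier,
    # then regress temperature on SOI plus all the lagged forcing columns.
    n_lags <- 50
    lagged <- sapply(0:(n_lags - 1), function(k)
      c(rep(NA, k), head(forcing, length(forcing) - k)))
    colnames(lagged) <- paste0("F_lag", 0:(n_lags - 1))

    fit <- lm(temp ~ soi + lagged)               # rows with NA lags are dropped
    coef(fit)[grep("F_lag", names(coef(fit)))]   # these coefficients estimate r(t)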

resp_var50.png
Figure 3
Direct fit of temperature to time-shifted forcings plus the El Nino SOI, showing the coefficients a_i in the fit. The different colors show increasing numbers of terms used in the fit.

These fits seem to be reasonably stable as more terms are added. Unfortunately they also give negative values for r(t) at t = 6 to 7 years and intermittently at longer times. None of the negative values is actually statistically significant in the fit, however, nor are most of the positive values for t above 5 years. Exceptions are the peaks at just over 20 and just over 40 years. The peak around 4 to 5 years also seems to be real here, similar to the early peak suggested by the power-law fits.

The trouble with this parametrization (aside from the very large number of parameters being used to fit the curve!) is that we really do expect the r(t) curve to be reasonably smooth over long time periods, and not spiky in this fashion. There shouldn't be much difference between the heat pulse from a forcing felt 40 years ago and one felt 41 or 42 years ago, but this particular fit implies there would be.

I've tried one more parametrization, a rather ad hoc collection of Gaussian terms of increasing width centered on increasing time offsets (1, 1.4, 1.96, ... years). The resulting coefficients are almost certainly meaningless in themselves, but the overall curve resembles what we've been seeing in the other examples, with (forced) smoother behavior at longer times.
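Schematically, the basis can be constructed as in the following sketch; the exact widths and number of terms below are illustrative choices, not necessarily those used for Figure 4:

    # Gaussian basis: centers at 1, 1.4, 1.96, ... years (ratio 1.4),
    # with widths assumed here to grow in proportion to the centers.
    centers <- 1.4^(0:12)                      # out to roughly 57 years
    widths  <- 0.5 * centers
    t <- 0:100
    basis <- sapply(seq_along(centers), function(i)
      exp(-(t - centers[i])^2 / (2 * widths[i]^2)))

    # Each forcing series is convolved with these basis curves; the fitted
    # linear combination of the columns is then the smooth estimate of r(t).
    matplot(t, basis, type = "l", xlab = "years", ylab = "basis value")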

resp_gaussian_1.4.png
Figure 4
Fit of temperature to a sum of shifted Gaussian-convoluted forcings plus El Nino SOI, showing the resulting response function in the fit.

This looks a little more reasonable than the direct fit (although it still goes slightly negative in places), and yet shows roughly the same peaks at around 5 years, around 10 years, and above 20 years. The really long-term behavior is certainly not constrained well by the data, but integrating the response to 50 years here gives a lower bound of 0.48 C per W/m^2 on the response (integrating the full curve shown gives about 0.73 C per W/m^2, so the long-term value of the response function, while small, gives a significant contribution to the total long-term response).

If this peaked curve with a long tail is anywhere close to a realistic portrayal of Earth's real response function, then the two-box model with a fit given by a sum of 2 exponentials (or even a 3 or more-box model) is not going to be capable of capturing the full response behavior. Both the peaks and the long tail are of interest; the peaks may be well enough constrained by 20th century observational data that a more thorough analysis could tell us more about them and what they might mean. The long tail can only be better known from longer observational records - possibly paleoclimate ice core data could tell us more, if reasonable models for forcings can be included.

Finally, just as a cross-check that these fits are doing something reasonable, here's the fitted temperature curve for the last case, which models temperature as a multiple of SOI plus a linear combination of the forcings convolved with these shifted Gaussians.

temp_gaussian_1.4.png
Figure 5
The temperature curve resulting from our "sum of Gaussians" fit. Visually the match to observed temperatures is pretty good.

The utility of the "two-box model" does not lie in its purported division of the Earth into two boxes, but rather in its attempt to model the mathematical relationship between Earth's climatic temperature response and the changes in radiative energy flux that are believed to cause that response. There are clearly other ways to approach that modeling, some of which may be much more meaningful. In either case, if the approximation to Earth's actual response function is not far off out to some reasonable time limit, we get a genuine lower bound on Earth's long-term sensitivity, and that value should be reliable regardless of the original motivation for the approximation.

Comments


The interesting thing about

The interesting thing about this post is that a wide range of fits seem to give a similar impulse response. This gives me confidence that a reasonable estimate should be obtainable for about the first five years of the impulse response function.