My last few posts have been on some examples of dubious scientific publications. Some publishers are bad actors. Some authors are naively over-confident and have found naive editors or publishers to match. Sometimes it can be hard to tell. The last case I'm going to look at here is perhaps the worst situation - where the authors are clearly behaving badly, and somehow made it through some form of peer review. This is the sort of thing that gets reported regularly on Retraction Watch, and also similar to the Gerlich and Tscheuschner case except that the journal in question is slightly more prestigious (G&T's journal, IJMP-B, has an impact factor of less than 0.5). And once again the topic is climate change.
The paper this time is Why models run hot: results from an irreducibly simple climate model by Christopher Monckton, Willie W.-H. Soon, David R. Legates, and William M. Briggs, published in Science Bulletin by Science China Press and Springer-Verlag. Given that Springer is about to merge with Nature, the breakdown of reasonable peer review in this case indirectly reflects badly on one of the most prestigious journal brands in all of science (Springer itself is of course also highly regarded).
Monckton and friends' paper has been widely criticized already by ... and Then There's Physics, by Jan Perlwitz in two articles, and by Roz Pidcock at the Carbon Brief, who quotes various other scientists on the topic. Since the essential argument is barely changed from Monckton's 2008 Physics & Society (P&S) article, which I found full of errors, I thought it deserved a bit of post-publication attention from me as well. It really is astonishing that this work was approved by an editor for what looks like a reasonable scientific journal.
At first sight this article isn't as obviously nutty as some of those I've discussed here previously - the graphics and tables seem to be well designed, the reference section looks fairly substantive. The mathematics is once again pure algebra with not a sign of an understanding of the calculus invented by Newton and Leibniz a few hundred years back - and we'll get back to that. But other than the overly simplistic math, the paper may not strike the experienced editor immediately as absurd.
In my last couple of posts I've been sharing some observations on the state of science communication these days, particularly problems with predatory publishers, overzealous publicity for ambiguous research, and the difficulty for an outsider in understanding what to trust in what's published in science these days. Bottom line: it's become much easier to communicate, but harder to know whether what's being communicated is worth paying attention to.
In this post I present two more cases of dubious publication - as these are in physics I'm rather more certain they are wrong. These have received a rather mixed collection of "post publication peer review" but in a fashion that I believe would leave the non-expert quite unaware they are not useful contributions to science.
Our first case study here is from the mega-journal PLOS ONE - "Implications of an absolute simultaneity theory for cosmology and universe acceleration" by Edward T. Kipreos, of the University of Georgia. PLOS ONE is an open access journal started in 2006 (2012 impact factor: 3.73); it is the world's largest journal by number of papers published, publishing original research across many fields. Published articles are reviewed for technical validity but as long as they pass that, it doesn't matter how important the reviewers or editors feel the article is. PLOS ONE doesn't publish many physics articles on the whole, but it has done a few along the lines of the above, for example this article that seeks to rework general relativity - probably equally dubious but one I have less background to assess.
What are we to make of Dr. Kipreos' article? The University of Georgia seemed rather pleased with the publication; its press release also notes he is a molecular geneticist, not a physicist. Hmmm. Some version of the press release (with no comment from any actual physicists) spread quickly around the internet; the article lists 8522 "views" on the PLOS platform right now. Despite the promise of "post publication peer review", there is only one incomprehensible comment on PLOS ONE itself, and nothing in PubMed Commons or PubPeer.
I did track down two apparently knowledgeable critiques in blog form from physicists Brian Koberlein and Matthew R. Francis. There is no sign of any response or acknowledgement from Dr. Kipreos of the problems they raise.
As Dr. Francis notes, articles "proving" Einstein was wrong are extremely popular among the less-well-informed. I recall when I was a young man, probably about 12, after having read some account of special relativity (probably by Asimov), something occurred to me that I thought I saw clearly - surely everybody else must have overlooked it! I'd be famous, get a Nobel, etc. etc. When I tried explaining it to my Dad (a chemist) he rather patiently suggested maybe I needed to study the subject a bit more. Sure enough, once I understood it better, I found my insight had long since been accounted for.
Sadly, there are a few people who never seem to grow out of that stage of certainty that they've discovered something simple that others have missed. Physics journals routinely receive these "crackpot" papers - I've heard something like one in five papers submitted to journals that cover gravitation and relativistic physics are in that category. Their editors can spot these papers a mile away.
A paper recently appeared in the open access peer-reviewed journal PeerJ titled Lung cancer incidence decreases with elevation: evidence for oxygen as an inhaled carcinogen. It makes the hard-to-believe claim that the oxygen we breathe is a cause of lung cancer. This is pretty far from my own expertise. Can I trust this result? Reading the paper I find it seems to have done the sort of statistical tests I would expect, checking for a long list of other possible factors including such things as radon and UV exposure (as well as of course cigarette smoking). Maybe they missed an important factor (cosmic ray exposure? levels of some other carcinogens that vary with elevation?) but the analysis looks pretty reasonable to me.
I know nothing about the authors or their previous publication history. Their institutions are reasonably well-known (University of Pennsylvania, UC San Francisco). In the paper they declare "no funding for this work" which is a little suspicious - most good science is done with some sort of external funding. What about the journal? PeerJ is a new entrant (launched in February 2013), focused on Biological and Medical sciences, with a very different business model from traditional journals. But it claims to be applying rigorous peer review. I have some first-hand knowledge of that as a co-author (among many) on a paper being considered for publication in that journal - we had a pretty thorough first round of review by external referees.
What about external commentary on this oxygen-causes-cancer paper? There was quite wide positive coverage presumably following a university or journal press release, for example this report from EurekAlert!. The only negative response I could find was this one from Fiona Osgun at Cancer Research UK which concludes "this paper is an interesting read certainly, but definitely doesn’t tell us that oxygen causes lung cancer" - her primary complaint seems to be they likely didn't take smoking fully into account due to issues with the years for which data was acquired in the study.
Who to trust here? Unfortunately it's very unclear at the moment. To me this doesn't look like either the journal or authors were "behaving badly" - at worst they might have made some honest mistake that will be understood as this research is followed up on. At best - well, maybe we now understand another source of cancer risk. But maybe I'm way off base - like I said, this isn't a field I'm at all familiar with!
A second recent paper in a similar vein is Solar activity at birth predicted infant survival and women's fertility in historical Norway published in the quite-prestigious 350-year-old Proceedings of the Royal Society B (impact factor 5.683). This is again a statistical study of correlations, this time between solar activity (sunspot counts) at date of birth and various metrics of survival and fertility. Again the authors seem to have accounted for a variety of possibly confounding factors - the science looks reasonable to me (far from an expert in this field). The authors themselves are from the "Norwegian University of Science and Technology" in Trondheim which seems like a respectable place. The journal is clearly among the most well-regarded in the world. There's no sign of bad behavior by either author or journal here.
So what about reactions? The paper received pretty widespread media coverage, again essentially parroting a (positive) press release, with occasional quotes from other scientists expressing doubts about the mechanism. The only lengthy critical response I could find was this one by Richard Telford, and his critique of the UV mechanism appears quite devastating. Norwegians have very little UV exposure in the first place, and cloud and regional variability is much more significant than the solar cycle effect. His suggested explanations for the paper (assuming the authors and journal did not behave badly):
The first is that some property of the data causes the statistical methods (which look good) to fail unexpectedly. The second is chance – one of the 5% of results that appear to be statistically significant at the p = 0.05 level when the null hypothesis is true.
There is a third possibility - that some other mechanism than UV variation is mediated by the solar cycle and has an impact on life expectancy etc in Norway (and perhaps elsewhere).
So do I trust this one? Given the strength of the one critique, right now I feel most likely the authors have made some sort of honest mistake. But again I'm way outside my own expertise, it is really hard to know what's right.
In both cases this state of uncertainty isn't bad - it's quite normal for science at the cutting edge to be ambiguous and uncertain. Scientific results that stand the test of time require confirmation by replication of studies like these under different conditions, testing the explanatory hypothesis via all the experimental and theoretical implications one can. When multiple lines of evidence all support the same picture of reality, then one can be fairly certain it is right. When we just have one line of evidence, in cases like this, it's fine and appropriate to be skeptical. Over time the truth should be sorted out.
I thought I ought to share a recent incoming email with the world... I know this is a widespread problem, but the coincidence of this appearing just as I was thinking about Gerlich, Tscheuschner and Monckton made posting this one a little irresistible :)
Invitation to Propose a Special Issue and be the Lead Guest Editor
SciencePG (http://www.sciencepublishinggroup.com) is one of the worldwide publishers who is dedicated to promoting exchange of knowledge and advancing technological innovation. Special Issue is a part of SciencePG and plays an important role of its rapid development. Acquiring that you have once published a paper titled COMMENT ON "FALSIFICATION OF THE ATMOSPHERIC CO2 GREENHOUSE EFFECTS WITHIN THE FRAME OF PHYSICS" on the theme of Greenhouse effect; climate; thermodynamics in INTERNATIONAL JOURNAL OF MODERN PHYSICS B, SciencePG believes you must have great achievements in your research field and sincerely invites you to propose a Special Issue and be its Lead Guest Editor.
No, I'm not talking about how last year was the hottest (globally) on record. It's been almost two years since my last post here, but I'm intending to end the silence. I have actually been doing a lot of reading and listening of one form or another, along with a few comments here and there, especially on twitter. One reason I haven't felt an urgent need to write has been a couple of excellent new entrants in the climate blogosphere:
It's been another two and a half years since I posted about my hybrid car - how has it been doing now that it's getting past 100,000 miles and the original warranty period is coming to an end real soon (8 years on the electrical system)? Here's the same graph extended to another 30,000 miles of driving:
I've noted before:
Loops are at the heart of nonlinearity and the complexity that makes the world interesting, but loops are also dangerous and eagerly avoided by the analyst. Circular reasoning is rightly condemned within pure logic and feedback loops are the bane of every sound system, but loops of self-consistency seem naturally at the heart of how we make sense of the world, and what true understanding is. I think one of Hofstadter's points [in "I Am a Strange Loop"] is that it is only through level-crossing loops that you can break out of simple tautology into true understanding. Perhaps this is at the heart of the "inductive" reasoning that Popper persistently attacks in his "Logic of Scientific Discovery" [...] Economic loops are central to growth; the familiar chicken-and-egg loop is at the heart of life itself.
There's been a bunch of discussion lately about comments on blogs that has made me even more convinced they aren't worth it here - so I've turned off all commenting by anonymous users.
When the APS Forum on Physics and Society publication "Physics and Society" published an article by one Wallace M. Manheimer suggesting with little evidence that renewable energy was useless, I felt obligated to respond with some more complete information. They published my letter in the latest issue; I've reprinted it below.
Wallace M. Manheimer's article on energy choices in the April 2012 issue makes a number of important points, but also goes wrong on many fronts, and I hope Physics & Society will allow at least some correction of these misstatements.
To start at the end, Manheimer asserts that "one cannot talk about climate and ignore energy supply. Yet, these organizations [AIP and APS] have done just that." One need only read the same issue of Physics and Society to know that claim is false - the book review by Paul P. Craig mentions "the first APS energy study [...] in 1973", which has been followed by many others. Manheimer himself cites the recent APS "Energy Efficiency Report" - and then appears to dismiss it as parochial. This is ironic since he earlier claims that cutting US energy use would be "worse because distances are much greater in the United States, it is colder here, and we have responsibilities as a major world power." That argument pertains to his comparison with Italy, but in general technology developments allowing efficiency gains in the US apply equally well or better elsewhere.
I had an odd call at work the other day. My cell phone rang and I noticed an unfamiliar number with a Rochester, NY area code.
Rochester: "Hello, is this the bishop?"
Me: "Uh, yes"
I waved the colleague I'd been talking with out of my office and closed the door as you never know when a call like this will involve confidential matters.
Rochester: "I'm really sorry to disturb you but they gave me your number and told me to call."
Me: "Oh, no problem, what's this about?"
It's mid-February 2012 and the various groups reporting global surface temperature data have all posted numbers that are on the low side compared to the rest of the 21st century so far, though still several tenths of a degree above the 20th century average. NASA's Goddard Institute for Space Studies (GISS) in particular, which I've been following and posting guesses/predictions on for several years now, has just posted a January 2012 number of 0.36 as the global land + sea temperature anomaly, 0.15 degrees C below the 2011 annual average of 0.51 and 0.27 degrees below the 2010 (and 2005) record of 0.63 degrees.
So naturally, those who deny the scientific evidence for human-caused warming have been going on about cooling, how warming has stopped for many years, and so forth - though not as adamantly as they were back in 2008; that cooling proved very brief, so perhaps they have learned to be more cautious. If I were of a mind to go with simple linear trends and didn't believe the science on warming myself, the January 2012 number of 0.36 would sound like a pretty good bet for the whole of 2012 at this point - maybe it should be even lower.
But I understand the science and know quite clearly warming will continue (with some minor ups and downs) until we stop burning fossil carbon - and even after that, it may take a while for the warming to actually stop. I've plugged some numbers in and come up with a science-based prediction for 2012: 0.65 C (plus or minus about 0.07). That middle value would break all previous records. Even on the low end it would put 2012 at the 5th or 6th warmest year on record (just slightly cooler than the remarkable early warm year of 1998 in the GISS index). Why am I so sure 2012 will be so warm?
The following is my review of The Hockey Stick and the Climate Wars, by Michael Mann - I read the Kindle edition (and sent him a few corrections for typos here and there).
As I was reading this first-person account of some of the most maddening episodes in modern times, I wondered to myself - what audience is this written for? How will some of the different players and bystanders react? Is Dr. Mann bringing on himself here yet another round of baseless attack from those who side with the most powerful entities human civilization has ever known?
I have no doubt the attacks will continue to intensify. If you hear about this book from some of the people, foundations and corporations that Mann names in it, please remember they have a very strong agenda: they don't want you to read it. If you find yourself sympathizing with one of these powerful entities, that means you need to read it, more than anybody else.
Having recently acquired (through work) an iPad, I've downloaded a lot of free or inexpensive e-books, and made some attempt at reading a few of them on the system. It's not quite like a physical book, but it's not a bad experience. There have been some formatting or editing issues (presumably because the originals were scanned and turned into text via OCR) - but sometimes you see copy-editing errors in a physical book as well. It means it's slightly less ideal than the "real thing", but I don't think I've run into anything that prevents communication of the author's intent.
Among the books are many that I've long had some desire to read, but never got around to looking up in a store or library - it's easier (or perhaps just implies less commitment) to download them than to go out searching. One of these was the original 1936 book by economist John Maynard Keynes, "The General Theory of Employment, Interest and Money", which led to the so-called Keynesian philosophy in US monetary and economic policy in the mid 20th century. Keynes wrote in the context of the Great Depression, which the world was experiencing at that time, and while much of what he says can be hard to understand, it surely has some relevance to the modern "Great Recession" the world has been going through for the last few years. Below I'll quote some parts of the text I found particularly enlightening, without a lot of comment from me on the matter since to say anything knowledgeable on this would require more familiarity with other economic schools of thought than I possess.
2010 and 2011 may not have been unusual in terms of the number of energy-related disasters, but I suspect they have at least been unusual in terms of the quantity of headline-grabbing material and TV news attention, and the ongoing disaster stemming from the earthquake and tsunami in Japan is only the most dramatic of them. With the 1-year anniversaries of the Upper Big Branch and Deepwater Horizon disasters running past along with the 25th anniversary of Chernobyl, I felt a need for some sort of quantitative comparison of these various events...
An explosion or earthquake or other disaster of that sort involves the almost instantaneous release of a large quantity of energy. For earthquakes we have a convenient measure in the Richter scale, which measures the shaking amplitude. The Richter scale increases logarithmically, so that an increase in magnitude by 1 means a shaking amplitude 10 times as large. The quantity of energy involved scales as the 3/2 power of that amplitude, so 2 magnitudes on the Richter scale corresponds to an increase in energy release by a factor of 1000. Converting energy to standard metric notation in terms of joules (1 J = 1 kg m^2/s^2), the Richter scale magnitudes come to:
Magnitude 3: 2 GJ (2x10^9 J)
Magnitude 5: 2 TJ (2x10^12 J)
Magnitude 7: 2 PJ (2x10^15 J)
Magnitude 9: 2 EJ (2x10^18 J)
Nuclear explosions are typically measured in units of kilotons of TNT, where 1 kt TNT = 4.2 TJ, i.e. a 1 kiloton explosion should be about double the energy release of a magnitude-5 earthquake, and a 1 MT (megaton) explosion around double the energy release of a magnitude-7 earthquake.
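The magnitude-to-energy conversion above is easy to sketch in a few lines; the constants here are just the round numbers quoted in the text:

```python
def richter_energy_joules(magnitude):
    """Approximate seismic energy release for a Richter magnitude.

    Calibrated to the round numbers above (magnitude 3 ~ 2 GJ), with
    energy scaling as the 3/2 power of shaking amplitude, i.e. a
    factor of 10**1.5 (about 31.6) per magnitude step.
    """
    return 2e9 * 10 ** (1.5 * (magnitude - 3))

KT_TNT_JOULES = 4.2e12  # 1 kiloton of TNT

# A 1 kt explosion is roughly double a magnitude-5 quake,
# and a 1 MT explosion roughly double a magnitude-7 quake:
print(KT_TNT_JOULES / richter_energy_joules(5))        # 2.1
print(1e3 * KT_TNT_JOULES / richter_energy_joules(7))  # 2.1
```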
Also worth thinking about in comparison is the non-explosive use of energy, as it runs through the natural world and as we use it for our own purposes. Since a year consists of just over 3x10^7 seconds, a 1 GW power plant over the course of a year produces 3x10^16 J or 30 PJ of electrical energy. That's about 7 times the energy release of the 1 MT explosion, about 15 times the energy release of a magnitude-7 earthquake. That energy release is spread over tens of millions of seconds, not just the few seconds of an explosion, but it's good to remember it is a large quantity of energy.
Human society currently uses about 15 TW of primary energy, or about 450 EJ per year. That's well over 200 magnitude-9 earthquakes' worth - one every day and a half or so. That's a lot of energy.
Earth receives energy from our Sun at a rate of about 174 PW. In a year that's about 5x10^24 J, 5 YJ (yottajoules) or 5 million EJ. That's a magnitude-9 earthquake worth of energy every 12 seconds! Luckily it's spread out over the whole (day-lit) surface of the Earth, so we don't normally experience the magnitude of that energy flow in any dramatic fashion. Still, it's worth remembering how natural energy scales like this tend to dwarf whatever humans do.
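These comparisons are simple enough to check directly; here's a quick sketch using the round figures quoted above:

```python
SECONDS_PER_YEAR = 3.15e7  # just over 3e7, as noted above
M9_JOULES = 2e18           # energy release of a magnitude-9 earthquake

gw_plant_year = 1e9 * SECONDS_PER_YEAR        # 1 GW plant: ~3e16 J, ~30 PJ
human_use_year = 15e12 * SECONDS_PER_YEAR     # 15 TW: ~4.7e20 J, ~450 EJ
solar_input_year = 174e15 * SECONDS_PER_YEAR  # 174 PW: ~5.5e24 J, ~5.5 YJ

print(human_use_year / M9_JOULES)  # ~236 magnitude-9 quakes per year
print(M9_JOULES / 174e15)          # sunlight delivers an M9's energy every ~11.5 s
```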
So, how do our recent collection of energy-related explosions and disasters compare?
A few months ago Tamino at Open Mind posted a fascinating analysis of warming obtained by fitting the various observational temperature series to a linear combination of El Nino, volcano, and solar cycle variations (using sunspots as a proxy for the latter), plus an underlying trend, allowing for some offsets in time between the causative series and the temperature. Year-to-year global average temperatures fluctuate significantly, by several tenths of a degree, but taking these "exogenous" factors into account greatly reduces that variation. Not only does removing the "exogenous" components show the underlying trend more clearly, it occurred to me that it also allows prediction of future temperatures with considerably more confidence than the usual guessing (though I've done well with that in the past), at least for a short period into the future.
See below for a detailed discussion of what I've done with Tamino's model, for the GISS global surface temperature series. In brief, however, I present the results of two slightly different models of the future, first with no El Nino contribution beyond mid-2011, and second with a pure 5-year-cycle El Nino (starting from zero in positive phase) from mid-2011 on.
[Table: Model 1 and Model 2 predictions for the GISS Jan-Dec global average temperature anomaly, by year]
While there's some variation in future years, the final average temperature for 2011 should be close to 0.58 (similar to the temperatures in 2009). Temperatures in 2012 are likely to be much warmer - at least breaking the record of 0.63 set in 2005 and 2010, possibly (model 2) by as much as 0.15 degrees. With the continued waxing of the solar cycle and continued increases from CO2 effects, however warm 2012 is, 2013 should be even warmer (unless we get a big volcano or another strong La Nina then).
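The structure of this kind of fit is simple to write down. Here's a minimal sketch of the model's shape - note that every coefficient value below is a hypothetical placeholder for illustration, not one of Tamino's actual fitted values:

```python
def predicted_anomaly(year, enso, volcano, sunspots,
                      baseline=0.30, trend=0.017, ref_year=2000.0,
                      c_enso=0.07, c_volcano=-3.0, c_sun=0.0005):
    """Temperature anomaly as a linear trend plus exogenous terms.

    Each predictor should already be lagged by the appropriate few
    months relative to the temperature response. All coefficient
    values here are invented placeholders, purely to show the form.
    """
    return (baseline
            + trend * (year - ref_year)  # underlying warming trend
            + c_enso * enso              # El Nino index value
            + c_volcano * volcano        # volcanic aerosol optical depth
            + c_sun * sunspots)          # sunspot number as solar proxy
```

With the coefficients fitted to past data, projecting forward just means supplying assumed future values for the El Nino, volcano, and solar series - which is exactly where the Model 1 / Model 2 distinction above comes from.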
Back when I was a young graduate student our system administrator was a bit of a gamer. We used UNIX: a Digital Equipment VAX running some BSD version, and later Sun workstations - and I pause for a moment in memory of those worthy but now defunct corporations. UNIX at the time came with a bunch of standard command-line-oriented games (and graphical ones later on the Suns) - which of course the sysadmin was free to delete, but ours didn't. He even installed a new game - Empire (a multi-player "Civilization"-like game) - and started a few games hosted on our computers, soliciting players from around the internet.
For a few months Empire, rather than physics, became my passion. It ran on a schedule such that every 4 hours or 6 hours the clock ticked and you could make more moves of your military units, move commodities from one city to another, or make new plans for your cities. And of course all your opponents did the same. Being there right at the clock tick allowed you to attack first, if that was in the cards, or prepare necessary defenses for an expected attack. And missing a clock tick (for something as useless as sleep, for instance) meant losing tempo in the game; your military units might just sit there rather than move, one of your cities might start to starve, or food or other elements might be wasted because there was no room to store more.
Realizing this wasn't personally sustainable, I delved into the C programming language, which seemed to be the standard for UNIX but which I'd hardly used up to then (I'd done some Fortran and assembly programming before). After a few days' work I had an automated player program that I could schedule to run shortly after each clock tick to take care of the basics - moving commodities around and moving some of my units along pre-arranged paths that I could update once or twice a day.
This gave me a slight advantage over those players who weren't waking up every 4 or 6 hours at night to update their games, and my game started to do quite well. But not well enough for me; I started to notice some anomalies in the way certain things behaved in the game. If I used ground transport to move a fighter plane from one city to another, the mobility level in the city I moved it from dropped far more than I expected. And if I moved two aircraft from two different cities, both dropped to the same level. There was some bug in the game software, and I needed to track it down.
So I started reading through the source code of the game. This really got me up to speed on programming with the C language - the code had extensive use of pointers and there were arrays of pointers to functions and multiple layers of indirection that had to be traced to figure things out. When I finally got down to the code regarding moving aircraft, I discovered what was going on. The bug was that it was using the mobility of the central capital city as the starting value before subtracting the mobility cost of moving the plane, rather than the mobility of the actual source city. I quickly realized I could exploit the bug - if I kept my capital city mobility high, I could make use of the bug to quickly raise the mobility available in any city by bringing in a fighter plane and moving it around. This gave a huge advantage in the game - mobility was the key factor that limited how much you could do with each tick of the clock.
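In modern terms the bug looked something like the following - a hypothetical Python reconstruction (the actual game was written in C, and all the names here are invented for illustration):

```python
CAPITAL = 0  # index of the central capital city

def move_plane_buggy(mobility, src, cost):
    # The bug: the starting mobility is read from the capital city
    # rather than from the city the plane actually departs from...
    mobility[src] = mobility[CAPITAL] - cost

def move_plane_fixed(mobility, src, cost):
    # ...the correct version subtracts from the source city itself.
    mobility[src] = mobility[src] - cost

mobility = [120, 10, 10]  # capital is flush; outlying cities are not
move_plane_buggy(mobility, src=1, cost=15)
print(mobility[1])  # 105: moving a plane *raised* the city's mobility
```

This also matches the symptom described above: move planes out of two different cities and both end up at the same level, because both are computed from the capital's mobility.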
While perusing the source code I found some other things that looked like bugs too, and verified them in the game. One of the issues was handling of negative numbers. If you loaded a negative quantity of a commodity onto a ship in a harbor city, the code was set up to treat that the same as unloading a positive quantity from the ship to the city. However, while for positive loading the code checked that the city had sufficient quantity of the commodity, for negative loading (unloading) it didn't do that check for the ship. Loading large negative quantities of gold onto a ship gave you a way to create unlimited quantities of gold (or any other commodity the same way).
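A hypothetical sketch of that second bug (again with invented names; the real code was C):

```python
def load_gold(city, ship, amount):
    """Load `amount` gold from city to ship; negative means unload."""
    if amount >= 0:
        if city["gold"] < amount:  # loading checks the city side...
            return False
    # ...but unloading (a negative amount) never checks the ship side
    city["gold"] -= amount         # subtracting a negative *adds*
    ship["gold"] += amount
    return True

city, ship = {"gold": 0}, {"gold": 0}
load_gold(city, ship, -1_000_000)
print(city["gold"], ship["gold"])  # 1000000 -1000000: gold from nowhere
```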
Finding these bugs that could give such a huge advantage in the game gave me some moral qualms, and I consulted our sysadmin, who was running the game, about what I should do. He asserted that my only responsibility was to file bug reports and suggested fixes with the game developers, and then he'd update the game software when they fixed the problems. As long as the bugs were reported, it was perfectly legal (according to standard game rules) to exploit them... So I did...
My obvious and mysterious advantages in the game didn't sit well with the other players, a few of whom knew who I was. I soon found my nation under attack from a united alliance of all the others. With my bag of tricks I was still able to largely prevail, until the nuclear weapons came out...
Not long after this (November 1988) I was working on one of our Sun machines when suddenly everything mysteriously slowed down - the computers were being attacked by the first "internet worm". It turned out I was very close to the epicenter of this event, and one of my colleagues was a good friend of Robert Morris, the student who launched the attack, which exploited vulnerabilities in some standard UNIX system services. The era of computer viruses and worms was upon us. Morris was taking advantage of bugs in major computer systems just as I had exploited bugs in the Empire software to gain advantage in that game.
Bugs that are destructive in themselves, or available for exploitation by the unscrupulous, are almost inevitable consequences of our efforts at automation and at removing humans from low-level oversight and decision-making in any system. Even in systems where humans ostensibly make the decisions, if human actions are governed by rigid rules (whether or not they function well under ordinary circumstances) or are taken with incomplete understanding of what they are doing, the extent to which such a system becomes a "machine" with predictable responses almost inevitably invites a quest for "bugs" to exploit for personal advantage. Infamous hacker Kevin Mitnick found social engineering (tricking people into giving him their passwords) at least as effective as anything else in breaking into computers.
The problem extends far beyond the domain of computer systems. Economic, media, legal and political systems have become highly complex "machines" in modern times, governed by rigid rules and understood by few of those who depend on them. Vital decisions are often made by poorly paid bureaucrats (on regulation enforcement, say) or low-status workers (those mortgage "robo-signers", for instance). The process can be mystifying to the outsider, but to somebody who works to understand it, "bugs" in the system open up enormous (what most would regard as immoral, but often perfectly legal) opportunities for great riches or power.
The 2007-2008 financial crisis is very much a case in point, at least as I understand it from my recent reading of Michael Lewis' account, "The Big Short: Inside the Doomsday Machine". A number of people became enormously wealthy while bankrupting their own companies, their customers, or large swathes of the general public. They managed this through the exploitation of a handful of real "bugs" in US and international systems of finance. Some of these bugs have been addressed; some I'm less confident will be - which exposes further bugs in our political and media systems.
Is a sense of shame, embarrassment, fear at being discovered central to what makes us fully human? The old story in Genesis suggests so - (from the King James translation):
And they were both naked, the man and his wife, and were not ashamed. [...] And the serpent said [...] in the day ye eat thereof, then your eyes shall be opened, and ye shall be as gods, knowing good and evil. [...] And the eyes of them both were opened, and they knew that they were naked; and they sewed fig leaves together, and made themselves aprons. [...] And the Lord God called unto Adam, and said unto him, Where art thou? And he said, I heard thy voice in the garden, and I was afraid, because I was naked; and I hid myself.
The first emotion of the couple awakened by knowledge of good and evil was not joy, but shame. To be human is to err, to make mistakes. To do things that are truly embarrassing. It's part of who we are. But it is not just the mistake-making, it's recognizing those mistakes. To actually be embarrassed, to feel shame, to be afraid of the consequences of what we've done. If we routinely do stupid things but don't even recognize that we've done anything wrong, we're living in a paradise of ignorance, a pre-fall state of unconscious bliss.
It's become clear to me this is the state which US political and media culture is aiming for. A return to the Garden of Eden, where nobody can do anything wrong, and it is impossible for any public figure to ever feel shame. Where the media has no ability and feels no responsibility to distinguish between right and wrong, truth and falsehood, good and evil - except on those rare occasions where the bad guys are foreigners and everybody on our side can agree.
There are still occasional exceptions. Eliot Spitzer, briefly. You have to give the guy credit for disappearing from public view for a while after that business - and he was our governor too, a really powerful figure. He really, sincerely looked ashamed and embarrassed at the exposure of his sinful behavior. John Edwards is a worse story - and I and a lot of other people feel embarrassed about that one; I was a big supporter of the guy before his mess became public. But at least Edwards quit his campaign and really did look highly ashamed about his own behavior.
There have been a couple of minor congressmen resigning recently for stuff of that nature, but such cases seem to be pretty few and far between. Why is David Vitter still in the US Senate? John Ensign? Why did that Idaho senator and S. Carolina governor hang on until their terms ended? Yeah, yeah, Bill Clinton too. Though at least you could sort of feel his pain at times.
And those are just the sex scandals. When it comes to the more serious wrongdoing with respect to their actions in office, there seems to be no sense of shame at all, ever. John Murtha completely unrepentant. Charlie Rangel. Jack Abramoff, Tom DeLay and friends. Ted Stevens in Alaska. The many scandals of the Bush administration (and Reagan before). Did any of those folks ever admit that anything they did was wrong? Did you notice how governor Walker of Wisconsin reacted when the recent prank Koch call became public?
I am sure that reality shows and celebrity obsessions are a contributing factor in our culture of shamelessness. Perhaps that form of popular entertainment is a real source of harm, but I am far more concerned about the harm done by the lack of recognition of truth and falsehood, right and wrong, among our political and media classes. It almost seems that brazenness in the face of what would, to regular people, be highly embarrassing is an asset for politicians.
My evidence that we have now reached the pinnacle of shamelessness in US politics is this markup hearing of the House Energy and Commerce Committee, held on Tuesday March 15th. The subject of the hearing was House Resolution 910, titled the "Energy Tax Prevention Act". The committee, chaired by Fred Upton, makes the claim that their work to stop the EPA from regulating greenhouse gases will reduce gasoline prices.
Politifact examined Upton's claim on this and found it to be False. But at least the claim that regulations cause prices to rise and are therefore a little like a future "tax" (despite not involving taxes at all) has some rationale. So the bill title, while distorting the truth, isn't perhaps a completely implausible and blatant lie.
And one can understand that representatives might, for legitimate reasons, vote against something that on its face sounds like a good idea, or conversely for something that otherwise sounds like a terrible one. Though the vote in committee on the final bill was pretty clearly against the facts of the matter, as this editorial in Nature put it:
It is hard to escape the conclusion that the US Congress has entered the intellectual wilderness, a sad state of affairs in a country that has led the world in many scientific arenas for so long. Global warming is a thorny problem, and disagreement about how to deal with it is understandable. It is not always clear how to interpret data or address legitimate questions. Nor is the scientific process, or any given scientist, perfect. But to deny that there is reason to be concerned, given the decades of work by countless scientists, is irresponsible.
Worse than the title and distorted spin from the committee chair are the actual words uttered by some of our duly elected representatives during that hearing. This is not just an intellectual wilderness, this is a moral wilderness where completely provably false statements are just accepted, treated as legitimate points of view, where nobody needs to apologize for being wrong, ever.
Eli Rabett railed on journalists recently, and particularly (the bulk of) science journalists, for practicing "churnalism" - just writing stuff that's almost copy-paste, without even thinking about it, without seeming to put any effort in. Another part of the problem is what Jay Rosen has termed the view from nowhere - the attempt by the establishment press to appear neutral and claim objectivity, when in fact what they are doing is a form of unrecognized ideology in itself (that "both sides are equal", and all that follows).
John Broder has occasionally written some excellent pieces on the interaction between science and politics for the New York Times (though I've just been browsing their archives and haven't found a good example from the last year. Hmm. A large number of the ones I thought good turned out to have required corrections after initial publication. Oops.). In this latest piece however, he provides the perfect exemplification of both Rabett and Rosen's complaints. This has got to be one of the laziest, most egregious false-balance stories I have ever come across in the national press. The central point of Broder's piece is that "both sides" claim to be standing for science. Broder bases this on his claim that:
Democrats rounded up five eminent academic climatologists who defended the scientific consensus that the planet is warming and that human activities like the burning of fossil fuels are largely responsible. [...]
Republicans countered with two scientific witnesses who said that while there was strong evidence of a rise in global surface temperatures, the reasons were murky and any response could have adverse unintended effects.
I watched most of the hearing yesterday, and I was very surprised at Broder's claim that 5 of the witnesses were called by Democrats, and only 2 by Republicans. The Republican congressmen, in questioning, almost universally queried 3, not 2 of the witnesses, looking for favorable responses (some of them also tried to get responses on very loaded questions from Richard Somerville, called by the Democrats). Why would Broder think that only 2 of the witnesses were called by Republicans, when in fact 3 were? As Joe Romm pointed out in this piece, John Christy and Roger Pielke Sr. have been called upon to testify by Republicans before, so those two weren't exactly a surprise. The DDT-advocate was a bit of a surprise, but clearly there for the Republican "side".
In some followup discussion with Barry Bickmore (by email) and Kevin C (at Skeptical Science) it became clear we were missing something in the analysis of Roy Spencer's climate model. Specifically, Bickmore's Figure 10 differed significantly from Spencer's 20th century fit even though he was ostensibly using the same parameters. If you look at the year 1900 in particular, Bickmore's Figure 10 has a temperature anomaly slightly above zero, while Spencer's is pegged at -0.6 C. Bickmore did use a slightly different PDO index than Spencer for his 1000-year graph (figure 10) - but more importantly, he took Spencer's -0.6 C temperature as the starting point in the year 993 AD, rather than as a constraint on temperature in the year 1900, as it actually was in Spencer's analysis. It turns out that to actually match Spencer's 20th century temperature fit, the starting temperature in 993 AD needs to be extraordinarily, far beyond impossibly, low. We'll get to the details shortly.
BYU geologist Barry Bickmore recently reviewed Roy Spencer's latest book, "The Great Global Warming Blunder", finding a number of true "blunders" by the author. In particular he found some very peculiar properties of the simplified physical model that Spencer made a central feature of the book, showing that Spencer's curve-fitting allows infinitely many solutions beyond the one Spencer somehow settled on, among other related issues.
I tangled with Spencer over an earlier model like this which he was promoting more than 3 years ago. What he didn't seem to realize about that first model was that it was essentially trivial, a linear two-box model with two time constants (a subject I explored in detail here a while back). I tried explaining this, but he seems not to have gotten my point that such a model inherently contains no interesting internal dynamics, just relaxation on some (in this case two) time scales. Which seems to go completely against the point I thought he was trying to make, that some sort of internal variability was responsible for decadal climate change.
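To illustrate that point concretely (this is a generic sketch, not Spencer's actual model - the coupling coefficients below are invented for the example), the free response of any linear two-box model is governed by the eigenvalues of its 2x2 system matrix. For heat-exchange coupling with positive off-diagonal terms those eigenvalues come out real and negative, so the only "dynamics" available is decay on two time scales:

```python
import numpy as np

# A generic linear two-box model: dT/dt = A @ T (+ forcing), where T holds
# the two box temperature anomalies. These coupling coefficients are invented
# for illustration only - they are not values from Spencer's model.
A = np.array([[-1.5,  0.5],
              [ 0.2, -0.3]])

eigvals = np.linalg.eigvals(A)

# For this kind of stable, positively-coupled system both eigenvalues are
# negative real numbers; the free response is a sum of two decaying
# exponentials with time constants -1/eigval. Pure relaxation, no oscillation.
print(sorted(-1.0 / eigvals))
```

The design point here: for a matrix of the form [[-a, b], [c, -d]] with positive coupling b, c > 0, the discriminant (a - d)^2 + 4bc is always positive, so the eigenvalues are always real - internal oscillation is mathematically impossible in such a model, whatever the parameter values.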
So it was something of a surprise to me that Spencer based his "Great Blunder" book on an even more simplified version of this model, with just 1 effective time constant. He even tried to get a paper published using this essentially trivial model of Earth's climate. As Bickmore outlined in his part 1, the basic equation Spencer uses is:
(1) dT/dt = (Forcing(t) – Feedback(t))/Cp
where T is the temperature at a given time t, Forcing is a term representing the input of energy into the climate system (there is a standard definition for this in terms of radiation at the "top of the atmosphere") and Feedback is a term that itself depends on temperature as
(2) Feedback(t) = α (T(t) - Te)
with α a linear feedback parameter and Te an equilibrium temperature in the absence of forcing (Bickmore and apparently Spencer don't actually use the absolute temperature T and equilibrium value Te, but rather write the equations in terms of the difference ΔT = T - Te, which amounts to the same thing, but obscures an important point we'll return to later).
The final term Cp is the total heat capacity involved. Each of forcing, feedback and heat capacity is potentially a global average, but would normally be expressed as a quantity per unit area, for example per square meter. Since the bulk of Earth's surface heat capacity that would respond to energy flux changes on a time-scale of a few years is embodied in the oceans (about 70% of the surface), Cp should be defined essentially as 0.7 times the heat capacity of water per cubic meter, multiplied by the relevant ocean depth in meters (h):
(3) Cp = 0.7*4.19*10^6 J/(m^3 K) * h = c * h
where c = 2.9 MJ/(m^3 K) (Spencer and Bickmore seem to have forgotten the factor of 0.7, so use a slightly larger value for c, which means their h values are probably smaller than the actual ocean depth such a heat capacity would be associated with).
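As a concrete illustration, equations (1)-(3) can be stepped forward in time with simple forward-Euler integration. This is just a sketch of the model class under discussion - the parameter values here (feedback α = 3 W/(m^2 K), depth h = 700 m, a constant 3.7 W/m^2 forcing) are illustrative assumptions, not Spencer's fitted numbers:

```python
# Forward-Euler integration of the one-box model, equations (1)-(3).
# T below is the anomaly T - Te, so Feedback(t) is simply alpha * T.

SECONDS_PER_YEAR = 3.156e7
c = 0.7 * 4.19e6  # effective heat capacity per meter of depth, J/(m^3 K), eq. (3)

def simulate(forcing, alpha=3.0, h=700.0, T0=0.0, dt_years=0.1, years=100):
    """Integrate dT/dt = (F(t) - alpha*T)/Cp.

    forcing  -- function of time (in years) returning W/m^2
    alpha    -- linear feedback parameter, W/(m^2 K)
    h        -- effective ocean depth, m
    T0       -- initial temperature anomaly, K
    """
    Cp = c * h                         # heat capacity per unit area, J/(m^2 K)
    dt = dt_years * SECONDS_PER_YEAR   # time step in seconds
    T, t, out = T0, 0.0, []
    for _ in range(int(years / dt_years)):
        T += dt * (forcing(t) - alpha * T) / Cp   # equation (1)
        t += dt_years
        out.append((t, T))
    return out

# Constant 3.7 W/m^2 forcing (roughly a CO2 doubling):
result = simulate(lambda t: 3.7)
print(result[-1])
```

With constant forcing F, the model just relaxes exponentially toward the equilibrium anomaly F/α (here 3.7/3.0 ≈ 1.2 K) with a single time constant τ = Cp/α, about 22 years for these parameters - which is precisely the sense in which such a model contains no interesting internal dynamics.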