Steven Mosher: even Fuller of it

[UPDATE - June 24, 2010: the following text has been slightly modified following some discussion at ClimateAudit and in particular a detailed explanation from Steven Mosher of what he did wrong. Changes are indicated by strikethrough for deletions and bold for additions].

When people are obviously wrong it doesn't take much time or effort to dismiss their claims. When Joe Barton apologizes to BP we know he's spouting nonsense. When Gerhard Kramm gets simple integrals and averages confused it doesn't take much effort to convince anybody other than Kramm where he went wrong. When Tom Fuller blusters about quantitative meta analysis, Gish Gallops, and alternate universes you can tell he has trouble with logical coherence.

But the tricky cases are those who are much more subtle in their nonsense. Making stuff up is easy. Making stuff up that on the face of it looks somewhat plausible takes a bit more skill. Figuring out that the "plausible" stuff is just as much nonsense as the obviously wrong takes considerably more work, and some of these actors tend to make a lot of work for those of us trying to defend real science. One of the most skilled in creating plausible nonsense is Christopher Monckton. Prof. John Abraham is the latest of us to take on Monckton's fabrications, and collectively thousands of hours have surely been spent tracking down the ways in which Monckton has misrepresented science.

Brian Angliss has recently put a lot of effort into tracking down the basis of some of the claims regarding "climategate", in particular looking at the implications of malfeasance on the part of the scientists whose emails were stolen. Many of the conclusions Angliss examined were claimed at the website ClimateAudit, and in particular in a book published by Steven Mosher and Tom Fuller. There followed an extensive comment thread, including responses from Fuller and Mosher, and a response from Steve McIntyre at ClimateAudit that clarified some of the claims, prompting Angliss to revise his article to correct his own mistakes.

The first discussion point in Angliss' review of the claims and in the ClimateAudit back and forth with Mosher and Fuller is the meaning of the "trick" to "hide the decline" phrase found in the stolen emails. This has been adversely interpreted in a couple of different ways, but the actual meaning has been clearly identified as the process of creating graphs that include tree-ring-based temperature "proxy" data only up to 1960 (or 1980), the point where they start to diverge from temperatures measured by instrumental thermometers. There is nothing scientifically nefarious or "wrong" about this - the "divergence problem" has been extensively discussed in the scientific literature, including in the text of the most recent IPCC report. If you have reason to believe a particular collection of tree ring data is a good measure of temperature before 1960 but, for some still uncertain reason, not after that point, then it's perfectly legitimate to create a graph using only the data you think is reliable, particularly if these choices are clearly explained in the surrounding text or caption.

Figure 2.21 from IPCC TAR WG1

Figure 6.10b from IPCC AR4 WG1

What's definitely not legitimate is presenting a graph that is specifically stated to be showing one thing, but actually showing another. That might happen just by accident if somebody messed up in creating the graph. But the ClimateAudit discussion and Mosher/Fuller book appeared to claim that in one figure in the 3rd IPCC report (TAR WG1 figure 2.21, 2001) and in one figure in the 4th report (AR4 figure 6.10b, 2007) there was a real instance where "the scientists had actually substituted or replaced the tree ring proxy data with instrument data" deliberately, for the purpose of "hiding the decline". As Angliss cited, McIntyre definitely uses the word "substitution" (but Angliss was apparently wrong that McIntyre did this in the IPCC context), and Fuller highlighted a portion of the Mosher/Fuller book using the word "replaced". McIntyre later clarified that his claim was not related to these IPCC figures but rather something else. However, Steven Mosher in comment #7 on Brian's article at June 8, 2010 at 12:34 pm stated very clearly that he knew what the trick was and that this substitution/replacement was used for the IPCC figures:

you wrote:

"Looking closely at the graph shows that the tree ring data was neither replaced nor substituted. The zoomed-in version of IPCC TAR WG1 Figure 2.21 at right shows that the instrument data starts around 1900 (red line, red arrow added) while the tree ring data ends at around 1960 (green line, green arrow added). If the tree ring data after 1960 were simply substituted or replaced as McIntyre and Fuller claim, then the instrument data would have been appended to the end of the tree ring data or the instrument data would be shown in green in order to maximize the potential for misinterpretation. Neither is the case."

The TAR is the third Report. We are talking about the FAR. figure 6.10. But I can make the same point with the TAR was with the FAR. You clearly don’t know how the trick works. Let me explain. The tree ring data POST 1960 is truncated. That is step 1. That step is covered in the text of chapter 6 ( more on that later ) The next step is to SMOOTH the data for the graphical presentation. The smoothing algorithm is a 30 year smooth. Whats that mean? For example,
if you have data from year 1 to year 100, your first data point is year 15. Its value is the combination of the 15 PRIOR YEARS and the 15 Following years ( for illustration only to give you an idea how centered filters work) your LAST year is year 85. This year is the combination of the prior 15 years of the record and the last 15 years. year 86 has no value because there are not 15 following years. So with a record that goes to 1960 your SMOOTH with a 30 year window should only display up to 1945. The problem of end point padding ( what do you draw from year 1945-1960) has extensive literature. So for example, there is extending the means of adjacent values at both ends of the smooth. ( the proceedure used in Ar4 ch06) In the case of Briffa’s curve, this procedure was not used. It was used for all the other curves, but in Briffa’s case it was not used. To fill out the filter, to supply data for 1945-1960, the INSTRUMENT SERIES was used.
This has been confirmed by replication. So still, after all this time people do not understand the trick because they have not attended to the math.

1. the series is truncated at 1960.
2. a smoothing filter ( typically 30 years) is applied.
3. To compute the final years of the smooth ( half the filter width) the temperature series is used.

That procedure is the trick. in a nutshell. If you want directions read Jones’ mail.
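The centered-filter arithmetic Mosher describes (the smooth losing half a window at each end) is easy to check with a toy example. A minimal sketch in Python (the analysis later in this post uses R; the data here are invented):

```python
# A centered moving-average filter of width 2*H+1 has no complete
# window for the first or last H points, so without some padding rule
# the smoothed series stops H years short of the data's final year.
def centered_smooth(values, half_width):
    smoothed = []
    for i in range(half_width, len(values) - half_width):
        window = values[i - half_width:i + half_width + 1]
        smoothed.append(sum(window) / len(window))
    return smoothed

# 100 years of invented data, 30-year window (half-width 15):
data = [0.01 * year for year in range(100)]
smooth = centered_smooth(data, 15)
# Only years 15..84 get a smoothed value; the final 15 years are lost
# unless the series is padded past its endpoint somehow.
print(len(data), len(smooth))  # 100 70
```

The whole dispute is about which values get appended past the endpoint to fill out those last windows.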

So Steven Mosher here claims that the "trick" was to use the instrumental data for "end point padding" in the 1960-truncated Briffa (2001) series used in IPCC AR4 Figure 6.10b (and presumably in the similar series in the TAR figure 2.21 that Brian Angliss looked at). That is, despite claims to the contrary, Mosher claims the IPCC reports really did substitute/replace tree ring data with instrumental data. And in a way that was concealed from the public - in particular, the caption of figure 6.10b specifically states what the end-point padding was:

“All series have been smoothed with a Gaussian-weighted filter to remove fluctuations on time scales less than 30 years; smoothed values are obtained up to both ends of each record by extending the records with the mean of the adjacent existing values.”

Similarly in TAR figure 2.21 the end-point padding is stated as:

“All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.”

Mosher is claiming that a very specific procedure was used for smoothing, one that differs from that stated in these figure captions. I asked what the basis was for this claim, but no particular email from the scientists emerged to explicitly support it, and the closest thing to any analysis of the problem was a pointer to this thread at ClimateAudit where, if the above endpoint-padding procedure was ever examined, it's certainly not clear from the discussion.
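For concreteness, here is roughly what the TAR caption's stated procedure would look like: pad each end with the series mean over the first/last 25 years, then apply a Hamming-weights lowpass filter. This is a sketch under my own assumptions (standard Hamming weights, Python instead of the R used for this post's figures), not the actual IPCC code:

```python
import math

def hamming_weights(width):
    # Standard Hamming window: w[k] = 0.54 - 0.46*cos(2*pi*k/(width-1))
    return [0.54 - 0.46 * math.cos(2 * math.pi * k / (width - 1))
            for k in range(width)]

def smooth_as_captioned(values, width=40, pad_len=25):
    """Pad each end with the mean of the first/last pad_len values, then
    apply a Hamming-weighted running average, keeping only the smoothed
    values for the original span of years."""
    left = [sum(values[:pad_len]) / pad_len] * pad_len
    right = [sum(values[-pad_len:]) / pad_len] * pad_len
    padded = left + list(values) + right
    w = hamming_weights(width)
    wsum = sum(w)
    half = width // 2
    out = []
    for i in range(pad_len, pad_len + len(values)):
        window = padded[i - half:i - half + width]
        out.append(sum(a * b for a, b in zip(w, window)) / wsum)
    return out
```

Whatever the exact weights, the key point is that the caption describes padding with a single repeated mean, not with any other data series.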

One of the commenters pointed to the difference between the Briffa 2001 curve in the AR4 Figure 6.10b figure and in this NCDC page on the reconstructions:

Briffa 2001 reconstruction with others from NCDC

And indeed you see the Briffa curve (light blue) drops down rather precipitously in the NCDC figure close to its endpoint in 1960, while in the IPCC AR4 figure it doesn't drop nearly so far - here's a closeup of the IPCC version:

Figure 6.10b from IPCC AR4 WG1 - closeup on the endpoints

So why are these different? While the differences in the individual curves common to both figures seem rather minor visually, there definitely looks to be a problem with the handling of smoothing near the Briffa 2001 endpoint in 1960. But note that the NCDC figure doesn't actually state what endpoint padding was used for its graphs - it only says "All series have been smoothed with a 50-year Gaussian-weighted filter". Perhaps Mosher is right: the NCDC figure uses the nearby-mean endpoint padding that the IPCC figure claimed to use, while the IPCC figure uses the instrumental data for padding, contrary to its specific claim about padding with the mean? If Mosher is right, that means the scientists really did conceal what they were doing here, and the figure caption for figure 6.10b (and presumably for the TAR figure as well) was a lie.

A commenter (Glen Raphael, #112) at Angliss' post thought proof of Mosher's point was that nobody had debunked it yet:

Perhaps an even stronger bit of evidence is that we haven’t seen Mosher’s claims “debunked” by any of the usual suspects. If his account were incorrect and there were some innocuous alternative way to generate the same graphics, don’t you think we’d have heard about it by now? Wouldn’t a rebuttal have shown up in gloating posts or comments at Deltoid, DC, RealClimate, Tamino, or all of the above? I think it’s safe to say *if* these claims were false they’d be easy for somebody with access to the data to debunk and it’s also safe to say that if they *could* be debunked that would have been done

Well, maybe nobody has actually seen Mosher state what he's talking about so clearly before. But now that he has, yes, it should be easy to debunk. Let's take a look.

The raw data for the NCDC graph is available for download via the page linked above. And here's what it looks like:

Raw (unsmoothed) NCDC data (R code here)

Now let's apply a Gaussian smoothing filter (which both the NCDC and IPCC AR4 figures say they used) with the nearby-mean endpoint padding described in the IPCC AR4 caption - though I don't know exactly the parameters or equation they used, I found a functional form that seems to roughly reproduce the main features of the curves in those figures:

NCDC data with Gaussian smooth (10-year) + padding with mean of first/last 15 years (R code here)

The Briffa 2001 curve (blue, ending in 1960) looks remarkably like the curve in the IPCC AR4 Figure 6.10b above. Let's look at close-ups near the end-point:

Magnified view of smoothed NCDC curve, and IPCC AR4 Fig 6.10b (R code here)

The Briffa 2001 curve is blue on the left (from NCDC data) and light blue on the right (IPCC). While the right-hand light blue curve is a little hard to see under all the others, it seems to peak at very close to zero, slope down and then tail off to flat right around -0.15 in both cases. Comparing to the Briffa 2000 curve (green on the left, darker green on the right) the Briffa 2001 endpoint is just a little above the bottom of the valley of the Briffa 2000 curve in both cases. I.e. a pretty good match.
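The padding-then-smoothing procedure used for these plots can be sketched as follows - a Python illustration of the same idea as the linked R code, with my guessed parameters and invented data, not the figures' actual code:

```python
import math

def gaussian_smooth_mean_padded(values, sigma, pad_len):
    """Pad both ends with the mean of the adjacent pad_len values, apply
    a Gaussian-weighted running average, and return only the smoothed
    values for the original years."""
    left = [sum(values[:pad_len]) / pad_len] * pad_len
    right = [sum(values[-pad_len:]) / pad_len] * pad_len
    padded = left + list(values) + right

    half = pad_len  # use the padding length as the filter half-width
    weights = [math.exp(-0.5 * (k / sigma) ** 2)
               for k in range(-half, half + 1)]
    wsum = sum(weights)

    out = []
    for i in range(pad_len, pad_len + len(values)):
        window = padded[i - half:i + half + 1]
        out.append(sum(w * v for w, v in zip(weights, window)) / wsum)
    return out

# Invented anomaly series ending on a relatively high value, as the
# Briffa 2001 series does at 1960:
series = [-0.2, -0.25, -0.3, -0.2, -0.15, -0.25, -0.3, -0.2, -0.1, 0.076]
print(gaussian_smooth_mean_padded(series, sigma=3.0, pad_len=5)[-1])
```

Because the padding values are means of nearby data, the smoothed endpoint can never escape the range of the recent data itself.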

But then, what explains the NCDC figure, which had the Briffa 2001 data definitely heading rapidly downwards at the end? As I mentioned, the NCDC page doesn't say how endpoints were handled in that figure, so we'll need to do a bit of guessing.

The first natural possibility is that the endpoints were padded with the very last (or first) point in the series - rather than taking the mean of 15 or 25 points near the end, using the very last point alone. Doing that with the raw NCDC data gives this figure:

NCDC data with Gaussian smooth (10-year) + padding with first/last year data (R code here)

Oops! That looks even less like the NCDC graph, and not like the IPCC graph either - here's a close-up on the end:

Magnified view of smoothed NCDC curve with endpoint padding, compared with nearby mean padding (R code here)

The Briffa 2001 data now has a new valley at about -0.1 and then curves up - the reason for this is that the 1960 endpoint has a value of 0.076, much higher than the typical (negative) values in earlier years. So that one endpoint pulls the whole curve up when you pad and smooth in this fashion.
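That pull-up is simple weighted-average arithmetic. A toy check (all numbers invented except the 0.076 endpoint quoted above):

```python
# Padding a mostly-negative series with copies of its one high endpoint
# raises any centered average taken over the padded region, compared
# with padding by the (negative) mean of the values near the end.
tail = [-0.3, -0.25, -0.3, -0.2, 0.076]  # invented, except the 0.076 endpoint

pad_with_endpoint = tail + [tail[-1]] * 4             # repeat the last value
pad_with_mean = tail + [sum(tail) / len(tail)] * 4    # repeat the tail mean

avg = lambda xs: sum(xs) / len(xs)
print(avg(pad_with_endpoint), avg(pad_with_mean))
# The endpoint-padded average comes out well above the mean-padded one,
# pulling the smoothed curve upward near 1960.
```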

So clearly the NCDC graph used neither the IPCC padding nor simple endpoint padding for the Briffa 2001 data. One likely alternative is that they used the original Briffa 2001 data extended beyond 1960 for padding, despite the fact that the scientists had concluded it was no longer a valid temperature proxy after 1960. The NCDC data doesn't include the post-1960 Briffa 2001 data, and I couldn't find it in a quick search, but looking elsewhere it appears the data quickly falls to somewhere around -0.4 degrees temperature anomaly. So I took the simple expedient of padding the Briffa 2001 curve with -0.4's:

NCDC data with Gaussian smooth (10-year) + padding (Briffa 2001 only) with -0.4 (R code here)

and here's a comparison of the endpoint region:

Magnified view of smoothed NCDC curve with -0.4 padding (for Briffa 2001), compared with nearby mean padding (R code here)

This indeed looks very much like the original NCDC figure. So the unstated padding in that figure that brought the Briffa 2001 curve down so much was very likely use of the original Briffa 2001 curve beyond 1960, while chopping off the smoothed curve in 1960. That's perhaps justifiable, but a little inconsistent with the statements concerning the source of that data.

So it's pretty clear that the difference in endpoint smoothing between the NCDC and IPCC figures does not require grafting the instrumental data onto the tree-ring data as Steven Mosher claimed. But what do the graphs look like if you do that grafting?

NCDC data with Gaussian smooth (10-year) + padding (Briffa 2001 only) with instrumental data (R code here)

and comparing the endpoints region again:

Magnified view of smoothed NCDC curve with instrumental padding (for Briffa 2001), compared with nearby mean padding (R code here)

While the difference is small, you can see that the instrumental-padded curve flattens out more quickly than the mean-padded curve, and never goes much below -0.1 in temperature anomaly, while the mean-padded curve does go lower. Only the mean-padded curve matches the behavior of the IPCC AR4 WG1 Figure 6.10b illustrated above.

So, conclusively, despite Mosher's claims of certainty on what the scientists did, he was wrong.

But it sure takes a lot of effort to prove that one claim wrong, doesn't it?



"I tried using the data

"I tried using the data Arthur links to above with a 40-year Hamming window, and the result does not match in shape or position. My chart matches McIntyre's very well."

Spence, did you process Briffa with the full series or truncating at 1960 before smoothing?

Layman Lurker - sorry if I was not clear, but I was not attempting to replicate the end-points, merely attempting to check whether the window was applied correctly (by checking the shape of the curve at least one filter width away from the end points). It seems likely to me that the TAR graphic is not smoothed with a 40-point Hamming-weights filter as described in the text.

I see. Thanks for clarifying.

Hmm, now that I think of it, the graphs are supposedly of temperature anomaly relative to the 1961-1990 period, so a series that ends in 1960 has a fundamental baselining problem; there's no data for 1961-1990! Even the ones ending in 1980 would have a bit of that problem. Was that baseline issue for series that don't cover the full 1961-1990 period ever addressed in any of the IPCC reports or commentary?

I suppose the baselining is automatically handled by the calibration process. If the instrumental data used in the calibration is baselined to 1961-1990, then so will the proxy temperatures be. Even if the values actually used in calibration are from before 1960.

That would justify the baselines, yes. But then you should never be free to change the baseline, as DeepClimate is saying was done. Or at least, if you change it then that's really a recalibration of some sort, not just a shift of the data. Anyway, it looks like there was nothing odd about the AR4 data, the real question remaining is what happened in the 2001 TAR.

It depends. You could easily change the baseline from, say, 1951-1980 to 1961-1990 by just applying the difference between these from the instrumental period. This is a 'legal' re-baselining, not a recalibration (for which you would need real data).

I don't know what the re-baselining and re-scaling (?) is that DC refers to.
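The 'legal' re-baselining described above is just a constant shift computed from the instrumental record; a sketch with invented numbers:

```python
# Shifting a proxy series from a 1951-1980 base to a 1961-1990 base
# using only the difference of instrumental means over the two periods.
# All numbers are invented for illustration.
instrumental = {year: 0.005 * (year - 1950) for year in range(1900, 2000)}

def period_mean(series, start, end):
    vals = [series[y] for y in range(start, end + 1)]
    return sum(vals) / len(vals)

# Constant offset between the two baselines, from instrumental data only:
offset = period_mean(instrumental, 1961, 1990) - period_mean(instrumental, 1951, 1980)

proxy_1951_1980_base = {1940: -0.1, 1950: 0.0, 1960: 0.1}  # invented anomalies
proxy_1961_1990_base = {y: v - offset for y, v in proxy_1951_1980_base.items()}
```

No proxy data beyond 1960 is needed, which is what makes this a shift rather than a recalibration.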

Martin, Arthur

Perhaps the terms are not quite correct. But what appears to have been done is something along the lines of:

"This directory has all the series, aligned as I described to have a 1961-90 base climatology (or in the case of your series, a pseudo 1961-90 base climatology achieved by actually matching the mean of your series and the instrumental record over the interval 1931-60 ...)."

(Email from Mann to Briffa, Jones and Folland on Sep 23, 1999)

In a later email to Osborn, Mann says:

"As for decisions about the most appropriate baseline period to use for the series, that is as you point out an important issue and one we have to consider with some circumspection, especially if a "modern" calibration (e.g., 1931-1960) to the instrumental record gives a substantially different alignment from the more 19th century-oriented calibration you describe."

In fact, the alignment, if you compare TAR 2.21 Briffa recon to the equivalent in Briffa et al, is different in TAR 2.21, which represents 10% or so of the peak-trough range of the series.

Again, I will discuss all of this later this week (it will probably be my next post).

Whatever was done, I believe was done as a good faith attempt to make comparable various series that were calibrated against different temperature records and different periods. Admittedly it could have been (and should have been) documented better.

But the bottom line is this: I can't see how the claim that Mann deleted the post-1960 portion of the Briffa series and then "padded" it with post-1960 instrumental values can possibly hold. And by the way, McIntyre has also made that very claim.

DC, thanks, that makes a lot clear. Yes, this looks good-faith and best-effort, though I would be hesitant of such a manual 'post-calibration'. Perhaps Mike would agree with me now. But there clearly are no simple answers.

Yeah, nobody expects the inquisition.

So, no re-scaling?

Above I said:
"In fact, the alignment, if you compare TAR 2.21 Briffa recon to the equivalent in Briffa et al, is different in TAR 2.21, which represents 10% or so of the peak-trough range of the series."

Clarification: the offset is about 0.06C in TAR 2.21; that is, the whole series is displaced upward by that amount, relative to the equivalent Briffa series as rendered in Briffa et al 2001.

However, although the emails I quoted above do seem to point toward some kind of "recalibration", it turns out this was not done, in fact. There was no ad hoc "post-calibration" and the explanation of the difference lies elsewhere. To be continued ...

DC,

the Mann clarification at RC you refer to is here:

Ah, that explains a lot. Here's a closeup on the endpoint difference in the graph Mann linked there:

The scale on the right is temperature anomaly in increments of 0.2, so Mann's choice there made a difference of about 0.1 degree to the final ending point of his curve. But I would note a couple of things:

* I'm guessing Mosher based his conclusions on this statement, but aside from not verifying whether Mann's statement applied to the AR4 (or any) Briffa curve before so assertively claiming it, he even got the actual technique wrong. Rather than padding with the instrumental record, Mann was "padding with the mean of the subsequent data (taken from the instrumental record)" - i.e. a single repeated number (the mean), not the detailed record. How to pick that single repeated number for padding is certainly an issue, but picking a single number is certainly more justifiable than joining the two records together. That's what Mosher claimed, and even by Mann's statement there, that was never done.

* Even in Mann 99 there was no lying or misrepresentation of what they had done, rather omission of the detail of choosing that one padding number: "there is some ambiguity regarding the smoothed curves used to indicate the long-term variations in the record, as the boundary conditions have not always been stated"
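The distinction between those two padding choices can be made concrete with a toy sketch (invented numbers; Python rather than the R used elsewhere on this page):

```python
proxy = [-0.3, -0.2, -0.25, -0.15]       # proxy series up to its cutoff (invented)
instrumental = [0.05, 0.10, 0.20, 0.25]  # instrumental data after the cutoff (invented)

pad_len = 3
inst_mean = sum(instrumental) / len(instrumental)

# What Mann described: one repeated number (the instrumental mean).
padded_with_mean = proxy + [inst_mean] * pad_len

# What Mosher claimed: the detailed instrumental record grafted on.
padded_with_record = proxy + instrumental[:pad_len]

print(padded_with_mean[-3:])    # the mean, repeated
print(padded_with_record[-3:])  # the instrumental values themselves
```

With a single repeated number, the smooth's endpoint behavior depends only on that one choice; grafting the full record lets the instrumental trend itself leak into the smoothed proxy curve.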

As I noted above, I think Mosher's confusion is basically the same as Angliss' confusion (and the reason Angliss had to redact large sections of his analysis).

There are many different versions of "the trick" - the original hockey stick papers, the WMO statement, the TAR graph - and each of these "tricks" involves a slightly different process (even where it was intended that the trick be consistently applied, it was not).

This has resulted in a huge amount of confusion and talking at cross purposes on both sides.

A simple, honest presentation of the data in the first place would have avoided all of this...

Spence what data source are you using for Briffa's TAR graph?

>In some earlier work though (Mann et al, 1999), the boundary condition for the smoothed curve (at 1980) was determined by padding with the mean of the subsequent data (taken from the instrumental record).

Michael Mann's statement above is further reason to suspect that the TAR graph was done the way Mosher suggests.
I don't think this statement is what led Mosher astray; rather, the original ClimateAudit post reaches this conclusion for TAR, MBH98 and MBH99. Thomas Fuller has pledged a response on his site.

MikeN,

I am just using the data linked to by Arthur Smith in the main post. I haven't looked any deeper than that.

Steve McIntyre has a post providing his point of view on ClimateAudit now (which I'm sure you're all aware of!)

I would think that in MBH99 and friends, their main error was a lack of paranoia. Colleagues trusted each other to make sensible choices, well knowing that it actually didn't matter much for the big picture. This was over a decade ago... the kind of hostile exegesis this is now subjected to just couldn't be imagined back then.

"Their main error was a lack of paranoia"

In my branch of science, when you plot a chart, it is simply expected that the chart will reflect what you describe it as in the text. We do this not because we are paranoid, not because we "trust each other to make sensible choices", but because it is just good scientific practice.

Odd that you seem to be under the impression that in climate science paranoia is a prerequisite to performing analysis correctly and as described in texts. I have a slightly higher opinion of most climate scientists than you present here.

>Rather than padding with the instrumental record, Mann was "padding with the mean of the subsequent data (taken from the instrumental record)" - i.e. a single repeated number (the mean), not the detailed record.

This is certainly replacing actual data with instrumental data.

Deep,

"One last point. It's true that Briffa's own truncated smooth did use post-1960 points. To me that was probably incorrect - especially since the archive only goes up to 1960. If post-1960 is considered spurious and wasn't even used in calibration, then it probably should not have been used for padding either. Be that as it may, when Osborn sent the series to Mann in October 1999, he recommended truncating at 1960. That series didn't make it into the First Draft in any case as it was too late, and there was no backing reference.

The second time Osborn sent the series, following Mann's request for final versions for the Second Draft, he sent the same series *but this time explicitly truncated at 1960*. So smoothing with post-1960 values was an option that was not even available to Mann. This latter point is detailed here:

I'll be looking at all of these issues (as well as touching on what McIntyre himself has said on the subject of Briffa end point smoothing) in a future Deep Climate post."

1. If anyone expects to understand divergence someday (explain the data), doesn't that logically require the data to be archived?

2. If Briffa does not archive data after 1960, how can any future researcher ever hope to test a hypothesis regarding it? For example, recently I believe Esper wrote a paper where he explained some divergence by looking at the uncertainties in the temperature data and uncertainties in the standardization approaches. I.e., the divergence was explained. But if Briffa does not archive that post-1960 data then any effort to explain it is quite impossible.

Do you think best practices would require all of the raw data to be archived, especially when the post-divergence data would be needed to comprehensively test theories about divergence? Or was Briffa fully justified in archiving only the data up to 1960?

People should try reading all the relevant threads before posting. UC has not contradicted anything written here. This post evaluated whether Briffa's chart was smoothed with instrumental values and then truncated in AR4 and TAR, as Steve Mosher claimed. UC found a match with TAR, while this post rejects it with regard to AR4.

UC and the CA post spend more time deciding that Mann's curves were smoothed with instrumental temperatures. This is Mike's Nature trick (which is really Mike's GRL trick), which is not covered by this post and not alleged by Steven Mosher at S&R.

Briffa 2000 vs Briffa 2001

Briffa 2000 was cited in TAR, because that was all that was available by the deadline for second draft. It's true that this paper did not show a truncated version of the series. However, Briffa consistently has used the truncated version of low-frequency MXD reconstructions in *all* spaghetti-style comparisons to other results, going back to 1999.

So the Briffa 2000 of the NCDC graph (essentially the Briffa 2001 paper) should be the same data as used in the TAR? But in AR4 the Briffa 2000 curve mentions something about being reanalyzed in 2004 (?) Have you looked into whether they are really the same thing or not? Looking forward to seeing your posts on this (and the Wegman/1990 IPCC curve too!)

The TAR data sent by Osborn (per the climategate email) and NCDC archive match, IIRC.

I haven't looked at AR4.

I should also mention that there seems to be more smoothing in the TAR version (probably wider effective window). But the main differences are the rescaling/rebaselining and different end point treatment.

I think the problem here is that there are some bloggers who have made up their minds that the conclusion must be wrong.

The hockey stick apparently shows up in non-tree proxies and with multiple statistical techniques, so it's scientifically robust and that's where the science is at the moment, but there are bloggers who cannot accept that.

At this point in time the hockey stick argument on blogs has turned into an attempt to justify all those years of close parsing by turning up some kind of "smoking gun". But the science has been replicated independently so it's too late.

Real science 1 blog science 0

Tony, you got it. Replication is key. Scientifically, "auditing" the individual papers is about as useful as running them through a spell checker :-)

"But it sure takes a lot of effort to prove that one claim wrong, doesn't it?"

The problem, of course, is the misplaced burden of proof. No one should have to prove Mosher wrong -- the burden should be on him to demonstrate his claims. And so it would be if he were acting under the umbrella of science and the scientific method rather than engaging in a political, polemic realm.

Even in the legal realm the claim of dishonesty is a serious one and he who makes the claim should prove it.

Martin, this reminds me of the first time I discussed science on Usenet. At that time some highly fashionable critiques of science were going the rounds, referring to the minutiae of Millikan's notes on his oil drop experiment and the failings of Eddington's expedition that observed the total eclipse in 1919, which was used to verify that the gravitational effect on light was as predicted by General Relativity.

Of course one can pick any single presentation of scientific evidence to pieces like this, and that is certainly a worthwhile thing to do because it enables us to refine our techniques. What seems to be happening here, however, is that the failings of an old, outdated paper, the first attempt, in fact, to produce such a hemispheric reconstruction, are being used as if they constituted a meaningful critique of the science as it now stands.

My Usenet correspondents back in the 1990s had a similar mission: to compromise the credibility of the scientific method and promote their own pseudoscience.

One more loose end ...

There is a discrepancy in the Briffa et al 2001 figure, as seen on the NCDC site.

The legend and the data file refers to Briffa 2000 (green curve).

However, the text refers to:
"...three northern Eurasian tree ring width chronologies for 1000-1987 (green) from Briffa and Osborn (1999)"

The text is correct. For one thing, the Briffa 2000/Briffa et al 2001 reconstructions are for a much larger network, but go back only to 1400, as seen in the chart.

Not sure if the auditors/inquisitors have noticed that.

Yeah, seeing that made me stop trying to audit until I could figure out what other auditors had done.

Steve McIntyre has a reply; apparently his comment here was filtered out somehow.

I never saw a comment from Steve McIntyre - but check the "Policy" link above, I do get a lot of spam and have to wade through that from time to time, so I might have missed something.

In any case, McIntyre is entirely wrong in his recent post referencing this one when he states that my comment about "legitimate" graphs was in reference to something at ClimateAudit. It was specifically talking about IPCC graphs, no more, no less. This post is an analysis of a claim made by Steve Mosher about IPCC AR4 Fig 6.10. Despite McIntyre's incorrect claims about my comments and intentions here, his summary of the facts agrees with mine: we agree on the basic facts of AR4 Fig 6.10, and he agrees Mosher was wrong.

I hope McIntyre will correct his main post though - it is really needlessly antagonistic towards me, I said nothing about McIntyre being wrong about anything in my comments here (until this one).

[Note, lightly edited to clarify where McIntyre actually *was* wrong on something - namely, me.]

I presume that any valid critique of this post will eventually appear here. I don't want to wade into a quite different thread that is freighted with unnecessarily acrimonious personal issues. I'm also tending towards the view that we're getting a little too "inside baseball" in any case, and perhaps the issue should be set aside until such time as a consistent, considered and defensible line of reasoning emerges to challenge the hockey stick. I won't be holding my breath, but I'll keep an eye out. We'd need some new findings to emerge in paleoclimatology to overturn the existing evidence, but that isn't impossible. It could not, however, emerge solely from this practice of "auditing". Actual scientific research would have to be done.

So AR4 is cut off at 1960, and then padded with the data from prior to 1960. Do you get a good match with this?

That leaves Mosher's claims about TAR and WMO. He got the WMO claim wrong only in that it was not cut off at 1960; it does have temperature data appended and smoothed.
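The append-and-smooth procedure being described can be sketched on synthetic data. Everything here is an assumption for illustration: a Gaussian kernel with reflected ends stands in for whatever filter was actually used, and the series lengths and years are made up.

```python
import numpy as np

def gaussian_smooth(series, sigma=5):
    """Smooth a 1-D series with a Gaussian kernel, reflecting at the ends.

    The kernel and end treatment are illustrative choices, not the
    filter actually used in any of the published figures.
    """
    radius = 3 * sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(np.asarray(series, dtype=float), radius, mode="reflect")
    return np.convolve(padded, kernel, mode="valid")

def pad_with_instrumental(proxy, instrumental, splice_year, start_year=1000):
    """Drop proxy values after splice_year and append instrumental values,
    so a subsequent smooth near the end is pulled toward the instrumental
    record rather than toward the (possibly diverging) proxy data."""
    cut = splice_year - start_year
    return np.concatenate([np.asarray(proxy, dtype=float)[:cut],
                           np.asarray(instrumental, dtype=float)])
```

For example, splicing a hypothetical 1960-1999 instrumental segment onto a 1000-1999 proxy series and then smoothing gives a curve whose late-20th-century end tracks the instrumental values, which is the effect at issue.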

So now TAR remains outstanding, with ClimateAudit thinking it is temperature data that was used.

Forget about TAR 2.21 - given Mann's admission at RealClimate, doesn't that call the IPCC Report's Summary for Policymakers, with its hockey stick graph, into question?

If you're talking about the comment I read, Mann said nothing about the Briffa curve, it was about one of the early MBH curves. But I'm not clear on whether he was applying the comment to the version of MBH included in the TAR graph; UC looked at that and seemed to think it was. I'd like to check or at least see a clear graphic on the differences before proclaiming one way or another.

Yeah, one of the early MBH curves, not Briffa. ClimateAudit describes it for GRL (MBH99), while Jones refers to it as Mike's Nature trick, which is MBH98.

Mann is the editor in charge of these TAR graphs, and the filtering follows his preferences, just as AR4 followed Briffa's. This adds more weight to the idea that the TAR graph is tricked as Mosher describes.

Where can I read, please, Mr. Mann's description of what he actually did to the data? All I see here are guesses about what he may or may not have done. Why did he not simply explain his method from the outset? Has the scientific method changed since I were a lad?

When I started learning science nearly 50 years ago, I was used to writing up my experiment with a bit about purpose, a bit about apparatus (maybe even with a diagram drawn with one of those exciting templates), a piece about the exact method I used, then some stuff about observations, an analysis of my results and usually a conclusion....even if it was 'the experiment did not show what I set out to demonstrate'

This simple way of writing up a lab book was good enough for many of the great scientists of the past and got me through an Oxford Chemistry degree with a reasonable class.

But climate science seems to say: 'We poked about in a back cupboard for a bit of data, found some that looked useful, did some clever sums (exact details not disclosed) on it and came up with some pretty graphs that show what we wanted to prove anyway. And we're not going to tell you what we did in case you find out something wrong with it'.

Which is all a world away from the basic methods I learnt at Farnborough GS. No wonder I have a pretty sceptical view of almost anything that comes from the statistical manipulations of climate science.

Where have you looked? I haven't looked at "Mr. Mann's" data, so I don't know how well he's described it, but given that the ones I have looked at have been very well described, I wonder whether you have looked very carefully yourself.

What I did here was look at the IPCC AR4 WG1 graphs that had been claimed to be wrong. It turns out they were described correctly in the caption to the figure. Apparently all the data is available somewhere these days - I only spent a couple of days on this case, and all I needed was readily available. As far as I can tell, climate scientists have been extremely open about all their work in recent years.

Working science is quite different from college "lab book" science. Firstly it's far more complex, and most of the time you're working on "unknown unknowns" - things nobody has done before, where you don't know how they will turn out. Also, I suspect the transition from "lab books" to electronic media over the past 40 years or so was at first a detriment to good record-keeping. When instead of keeping notes and data in one hand-written book you now have data across dozens of different computer storage systems, well, that makes things a little trickier to keep track of. The scientists do seem to be getting better at that now, though.

Your characterization of what "climate science seems to say" is very different from what I have found in the IPCC reports. I suggest you go read them yourself to get some good perspective - IPCC AR4 WG1 - and every claim in there is referenced to the peer reviewed literature which explains in further detail where everything came from. Nothing hidden away that I'm aware of.

Latimer, the particular chart looked at here is Briffa's, not Mann's. The TAR graph is Mann's adjustment of Briffa's data.

There is a post on RealClimate linked above, where Mann admits to what he did with the hockey stick papers, adding in instrumental data at the end.

I don't think Arthur Smith's characterization of Mann's openness is valid.

Thanks for the update about 'working science'. It obviously has changed since I got my MSc when clarity of methods was paramount. Otherwise the examining professors would rip you to shreds in the viva examination. Luckily my thesis somehow passed this hurdle, even though I had to conclude at the end that the work did not demonstrate the effect that I had set out to show. Theory and experiment not in line, so we chucked out the theory. This is a sound principle in anything aspiring to be treated as a science.

But despite your protestations of such clarity in climate science, I note that neither you, nor anyone else in this entire thread has been able to point to a straightforward explanation in the published literature of exactly how the graphs in question were originally generated.

I guess I could go and try to read every paper on the subject and see what little nuggets I can glean. But I have neither the time nor the inclination to do so. Nor, in my professional career, do I try or expect to become an expert in every aspect of a wide-ranging set of topics. But I do have to be able to make some reasonably good assessments as to the quality of the work that others who are such experts do, and to make decisions based on these factors among many others.

And quite frankly what little I have read and researched myself about climate science does not fill me with confidence. The lack of rigour and clarity immediately makes me very suspicious. That we are still having the discussion above over a decade on only reinforces my view.

You can disabuse me of part of this notion by showing me (paper, page, para, ref etc) Mr Mann's published description of the method he used in the matter under discussion. If this cannot be done, then I will have to draw my own conclusions.

Why do I have to show you anything? Sorry, I'm just a plebe like you in this. Completely new to it all. You have a background in science, you can look stuff up as well as I can. I was able to reproduce the NCDC graphs except for end-point smoothing since the data was right there - but the smoothing wasn't specified, so that took some looking into. I'm sure it'll be the same deal for whichever of Mann's graphs you want to look at.
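To show concretely why unspecified end-point smoothing "took some looking into": the same series smoothed with the same kernel can end at visibly different values depending on the padding rule. This is a toy sketch with made-up data; the Gaussian kernel and NumPy's padding modes are my stand-ins, not the method used in any actual figure.

```python
import numpy as np

def smooth(series, sigma=4, mode="reflect"):
    """Gaussian smooth with an explicit end-padding rule.

    The padding choice only affects roughly the last 3*sigma points
    at each end - exactly the region people argue about in these charts.
    """
    radius = 3 * sigma
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    padded = np.pad(np.asarray(series, dtype=float), radius, mode=mode)
    return np.convolve(padded, kernel, mode="valid")

# On a rising series, "edge" padding (repeat the last value) holds the
# smoothed endpoint up, while "reflect" padding pulls it back down.
trend = np.linspace(0.0, 1.0, 200)
ends = {m: smooth(trend, mode=m)[-1] for m in ("reflect", "edge")}
```

So two honest reproductions of the same chart can disagree at the very end of the curve until the padding rule is pinned down - which is all my "took some looking into" amounted to.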

But think about what you're asking for a minute. You wrote a thesis on something. Suppose now, however many years later, suddenly the work you did becomes significant. There seems to be a difference between some property of the world you investigated in 1960 and what people see now. Scientists now need to go back to your data and examine in detail the reasoning and source behind every number and figure in your thesis.

Could you respond to a request for that data and information? Do you have it available? If so, that's wonderful. If not, think about why in your case it isn't possible and yet you demand it of others...

'But think about what you're asking for a minute. You wrote a thesis on something. Suppose now, however many years later, suddenly the work you did becomes significant. There seems to be a difference between some property of the world you investigated in 1960 and what people see now. Scientists now need to go back to your data and examine in detail the reasoning and source behind every number and figure in your thesis.

Could you respond to a request for that data and information? Do you have it available? If so, that's wonderful. If not, think about why in your case it isn't possible and yet you demand it of others...'

Your analogy is not at all valid. In my case, the work gave a negative result about a hypothesis. (If it had shown a positive result I might have chosen an academic career rather than specialising in IT and technical management.) Like Mann's piece, it was largely based on primitive computer modelling, but unlike Mann, all the data and code used are laid out in the thesis for anyone to see and reproduce. It may be bad code, the workings may have been dreadful, but at least they are available for others to see - in all their faded glory.

I do not know when (or in which fields) it became accepted scientific practice to keep such details deliberately obscure, but I cannot view it as anything other than a retrograde step for transparency, accuracy and the advancement of science by vigorous challenge and debate.

And Mann has been an active leading participant in his field ever since he published his first 'hockey stick' paper 12 years ago. It was the paper that made his name..his 'magnum opus' if you will. It did not lie unread in a University library for 30 years until stumbled upon by a researcher. Instead it was immediately seized upon by the advocates of CAGW as proof positive of their case. It catapulted Mann to front and centre of the discussion..with all the kudos, status and 'respect' that come with it. If Mann does not know (or cannot remember) the exact details of how he carried out this seminal work, then he should either come clean and say 'I forgot' or withdraw the paper. He has had 12 years to recover his memory. I have had a long career in recognising technical bullshit, and this has that unhappy odour lingering about it.

But even as Joe Sixpack with a science degree, I would expect to see that all work purporting to show that we are headed to hell in a handcart..and therefore we have to make huge changes to nearly every human system across the globe, should have been carried out to the highest possible standard..and audited to the same. That was how I got interested in the subject initially..I wondered just how true it was that 'the Science is Settled', which we heard so much pre-Copenhagen and pre-Climategate.

That such debates as you have initiated even exist today merely confirms my view that the science is a long way from settled. That a crucial piece of methodology remains obscure and unexplained is a sad reflection of the apparent fact that, when it has been done at all, it has been done sloppily.

Well, that claim that you put "all the data and code" out there for all, if true, is certainly commendable. Prove it, point me to an online copy. Or are you trying to hide something? It's not really available for all if it's not online for free.

See how easy this game is?

All the data and code for at least some of the modern temperature reconstructions *is publicly available* by that definition - free, online. For instrumental series, look at what Zeke Hausfather has been doing over at Lucia's blog: reproducing the GISS temperature reconstruction from scratch with his own code, rather than using the publicly available GISTemp code, to prove whether or not the GISTemp code actually does what it has been said to do. The results are pretty clear - GISTemp is not fiddled with to produce a rising trend; the numbers are accurate.
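The core of that kind of independent reproduction is simple enough to sketch. This is emphatically not Zeke's or GISS's actual code - just a toy version of the common approach (convert each station to anomalies from its own base period, then average with cosine-latitude weights as a crude stand-in for proper gridding), with invented station data.

```python
import numpy as np

def station_anomalies(temps, base=slice(0, 30)):
    """Convert each station's series (rows) to anomalies relative to
    that station's own base-period mean, so stations at different
    absolute temperatures can be averaged together."""
    temps = np.asarray(temps, dtype=float)
    return temps - temps[:, base].mean(axis=1, keepdims=True)

def area_weighted_mean(anoms, lats):
    """Average station anomalies with cos(latitude) weights - a crude
    stand-in for the gridding step a real reconstruction would use."""
    weights = np.cos(np.radians(np.asarray(lats, dtype=float)))
    return (anoms * weights[:, None]).sum(axis=0) / weights.sum()
```

With a handful of stations this already reproduces the qualitative result: if the individual records warm, the weighted anomaly series warms, with no "fiddling" anywhere for it to hide in.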

As to "catapulting Mann", his work was important and received recognition, but he's only number 211 on Jim Prall's list of most cited climate authors. There are others who have worked on the climate problem who have far more in the way of credentials. In 1998 Mann was only an "assistant professor", not even tenured, probably paid $40,000 a year or less. I don't think he owes you or anybody any more than he's given.

But I believe he at least *tried* to make his data and methods available - his methods were published but incomplete, and he clarified them more completely later. He had somebody send his data to McIntyre for reproduction, but apparently there were some errors in what was sent; McIntyre sees deliberate ill-will in that response, and surely there was at a later date thanks to episodes such as the one McIntyre just inflicted on me. But it all looks to me like a scientist trying to do the right thing. His work has since been vindicated by others in any case, where the data and methods are fully open.

If you think the discussion here indicates the "science is a long way from settled" you are mistaking nitpicking about a tiny branch on one tree for the entire forest. Yes there are always questions that can be raised (I'm sure even about your own research from long ago - are you really sure you fully disproved your hypothesis?) but on the basic questions of CO2 increase, warming, and human causation of both there is no remaining serious question.

In any case, unless you have a specific complaint about some piece of science you have actually read about and investigated in some way yourself, I'm considering our discussion on this at an end; we've wandered way off topic. Thanks.

A snapshot of the dispute -- I posted a recent comment at Climate Audit (time-stamped Jun 25, 2010 at 11:21 AM) as an attempt to summarize the state of the discussion circa mid-day on 6/25/10.

I inquired of Steven Mosher whether it correctly reflects his perspective; his answers are time-stamped Jun 25, 2010 at 1:02 PM.

Same question to Arthur Smith: have I caught the gist of the areas of agreement and disagreement between the two of you?

Sorry, I've been working all day, haven't had a chance to go revisit the climateaudit thread; I'll take a look tonight if I have time, or over the weekend.

My earlier comment "A snapshot of the dispute" (in the moderation queue at this writing) gave a link to a summary comment I'd left at Climate Audit, and invited a response from Arthur.

That invitation has been rendered moot, as Steve McIntyre closed the relevant thread there, for "food fight" reasons only peripherally related to the central issues under consideration.

Arthur, perhaps you'll find that some of the points merit confirmation, rebuttal, or elaboration.

Acrimony aside, there has been some useful bridging of the gap on this, I think. I have a clearer grasp of the issues Arthur raised, and of what Steve Mosher sees as the underlying questions.

AMac - thanks, I finally got a chance to read through that entire thread again. I would have had a few things to add, but really, I think we were pretty much at an impasse. Bender isn't a particularly useful go-between, it seems to me. Why'd Steve Mosher pick him to ask me questions? Always with the Mann-Jones stuff.

I simply haven't looked into what Mann, for example, did in any detail. I've read some of the claims McIntyre has made, I've read summaries of Wegman, and I've also read summaries of the NAS report and justifications from Mann and RealClimate people. It leaves me confused, but (A) Mann seems willing to admit to not doing things quite right in his first few papers, while (B) the weight of expertise on the subject now seems to be that Mann's results were largely right. The fact that Loehle did a reconstruction of his own that was at first non-hockey-stick-like, was then shown to be badly in error and ended up with something pretty hockey-stick-ish suggests to me the paleo-climate temperature reconstructions are pretty robust at this point. In particular the IPCC AR4 Fig 6.10b graph looks like an extremely fair representation of the state of our understanding at least of northern hemisphere temperatures over the last 1000 years or so. If McIntyre had actually come up with a real counter-point reconstruction of some sort, with details on things and where he thinks for example MWP temperatures should have been, well, I would find that a very useful contribution. But he doesn't seem to have ever done that.

As far as Jones goes, I had never heard of Phil Jones before last November. I'd heard of the Hadley-CRU and CRUTem temperature record, but never knew what CRU stood for. The one email where Jones told people to delete emails was clearly wrong. Otherwise, to me he's a nobody, and I haven't heard of anything specific he did that seems to have had any negative real-world consequence of any significance.

On your summary of the state of things - it seemed pretty fair. As MikeN points out below, I was planning to look into the TAR graph as well as AR4, but I only got to AR4 in this analysis. McIntyre's statements about the TAR graph seem to be clearer, but he's relying on analysis by UC, who I've been in contact with, and it's not clear UC ever did a very thorough comparison of the possibilities. So I'm unconvinced that there was anything wrong with the TAR graph, but also not convinced there wasn't something wrong, as Mosher originally asserted (and still seems to).

But after this ClimateAudit episode I'm feeling a little un-motivated to do any further looking at hockey stick data for a while. We'll see.