Steven Mosher: even Fuller of it

[UPDATE - June 24, 2010: the following text has been slightly modified following some discussion at ClimateAudit and in particular a detailed explanation from Steven Mosher of what he did wrong. Changes are indicated by strikethrough for deletions and bold for additions].

When people are obviously wrong it doesn't take much time or effort to dismiss their claims. When Joe Barton apologizes to BP we know he's spouting nonsense. When Gerhard Kramm gets simple integrals and averages confused it doesn't take much effort to convince anybody other than Kramm where he went wrong. When Tom Fuller blusters about quantitative meta analysis, Gish Gallops, and alternate universes you can tell he has trouble with logical coherence.

But the tricky cases are those who are much more subtle in their nonsense. Making stuff up is easy. Making stuff up that on the face of it looks somewhat plausible does take a bit more skill. Figuring out that the "plausible" stuff is just as much nonsense as the obviously wrong takes considerably more work, and some of these actors tend to make a lot of work for those of us trying to defend real science. One of the most skilled in creating plausible nonsense is Christopher Monckton. Prof. John Abraham is the latest of us to take on Monckton's fabrications, and collectively thousands of hours have surely been spent tracking down the ways in which Monckton has misrepresented science.

Brian Angliss has recently put a lot of effort into tracking down the basis of some of the claims regarding "climategate", in particular looking at the implications of malfeasance on the part of the scientists whose emails were stolen. Many of the conclusions Angliss examined were claimed at the website ClimateAudit, and in particular in a book published by Steven Mosher and Tom Fuller. There followed an extensive thread of comments, including from Fuller and Mosher, and a response from Steve McIntyre at ClimateAudit that clarified some of the claims, prompting Angliss to revise his article to correct his own mistakes.

The first discussion point in Angliss' review of the claims and in the ClimateAudit back and forth with Mosher and Fuller is the meaning of the "trick" to "hide the decline" phrase found in the stolen emails. This has been adversely interpreted in a couple of different ways, but the actual meaning has been clearly identified as the process of creating graphs that include tree-ring-based temperature "proxy" data only up to 1960 (or 1980), the point where they start to diverge from temperatures measured by instrumental thermometers. There is nothing scientifically nefarious or "wrong" about this - the "divergence problem" has been extensively discussed in the scientific literature, including in the text of the most recent IPCC report. If you have reason to believe a particular collection of tree ring data is a good measure of temperature before 1960 but for some still uncertain reason not after that point, then it's perfectly legitimate to create a graph using the data you think is reliable, particularly if these choices are all clearly explained in the surrounding text or caption.

Figure 2.21 from IPCC TAR WG1

Figure 6.10b from IPCC AR4 WG1

What's definitely not legitimate is presenting a graph that is specifically stated to be showing one thing, but actually showing another. That might happen just by accident if somebody messed up in creating the graph. But the ClimateAudit discussion and Mosher/Fuller book appeared to claim that in one figure in the 3rd IPCC report (TAR WG1 figure 2.21, 2001) and in one figure in the 4th report (AR4 figure 6.10b, 2007) there was a real instance where "the scientists had actually substituted or replaced the tree ring proxy data with instrument data" deliberately, for the purpose of "hiding the decline". As Angliss cited, McIntyre definitely uses the word "substitution" (but Angliss was apparently wrong that McIntyre did this in the IPCC context), and Fuller highlighted a portion of the Mosher/Fuller book using the word "replaced". McIntyre later clarified that his claim was not related to these IPCC figures but rather something else. However, Steven Mosher in comment #7 on Brian's article on June 8, 2010 at 12:34 pm stated very clearly that he knew what the trick was and that this substitution/replacement was used for the IPCC figures:

you wrote:

"Looking closely at the graph shows that the tree ring data was neither replaced nor substituted. The zoomed-in version of IPCC TAR WG1 Figure 2.21 at right shows that the instrument data starts around 1900 (red line, red arrow added) while the tree ring data ends at around 1960 (green line, green arrow added). If the tree ring data after 1960 were simply substituted or replaced as McIntyre and Fuller claim, then the instrument data would have been appended to the end of the tree ring data or the instrument data would be shown in green in order to maximize the potential for misinterpretation. Neither is the case."

The TAR is the third Report. We are talking about the FAR. figure 6.10. But I can make the same point with the TAR was with the FAR. You clearly don’t know how the trick works. Let me explain. The tree ring data POST 1960 is truncated. That is step 1. That step is covered in the text of chapter 6 ( more on that later ) The next step is to SMOOTH the data for the graphical presentation. The smoothing algorithm is a 30 year smooth. Whats that mean? For example, if you have data from year 1 to year 100, your first data point is year 15. Its value is the combination of the 15 PRIOR YEARS and the 15 Following years ( for illustration only to give you an idea how centered filters work) your LAST year is year 85. This year is the combination of the prior 15 years of the record and the last 15 years. year 86 has no value because there are not 15 following years. So with a record that goes to 1960 your SMOOTH with a 30 year window should only display up to 1945. The problem of end point padding ( what do you draw from year 1945-1960) has extensive literature. So for example, there is extending the means of adjacent values at both ends of the smooth. ( the proceedure used in Ar4 ch06) In the case of Briffa’s curve, this procedure was not used. It was used for all the other curves, but in Briffa’s case it was not used. To fill out the filter, to supply data for 1945-1960, the INSTRUMENT SERIES was used.
This has been confirmed by replication. So still, after all this time people do not understand the trick because they have not attended to the math.

1. the series is truncated at 1960.
2. a smoothing filter ( typically 30 years) is applied.
3. To compute the final years of the smooth ( half the filter width) the temperature series is used.

That procedure is the trick. in a nutshell. If you want directions read Jones’ mail.
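The centered-filter arithmetic Mosher describes can be sketched quickly. This is a generic moving-average illustration in Python, not the actual filter used for the IPCC figures; the 31-year symmetric window and the function name are mine:

```python
import numpy as np

def centered_smooth(x, window=31):
    """Centered moving average with NO endpoint padding.
    Returns NaN wherever the full window does not fit."""
    half = window // 2
    out = np.full(len(x), np.nan)
    for i in range(half, len(x) - half):
        out[i] = x[i - half:i + half + 1].mean()
    return out

# 100 years of toy data: only "years" 16..85 get a smoothed value
series = np.arange(100, dtype=float)
smoothed = centered_smooth(series, window=31)
print(np.isnan(smoothed[:15]).all())   # first half-window is undefined
print(np.isnan(smoothed[-15:]).all())  # last half-window is undefined
```

This is the point of Mosher's year-85 example: without some padding rule, a record ending in 1960 can only be smoothed up to about 1945, so the choice of padding determines what the plotted curve does in its final 15 years.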

So Steven Mosher here claims that the "trick" was to use the instrumental data for "end point padding" in the 1960-truncated Briffa (2001) series used in IPCC AR4 Figure 6.10b (and presumably in the similar series in the TAR figure 2.21 Brian Angliss looked at). So that, despite claims to the contrary, in the IPCC reports Mosher claims they really did substitute/replace tree ring with instrumental data. And in a way that was concealed from the public - in particular, the caption of figure 6.10b specifically states that a different end-point padding was used:

“All series have been smoothed with a Gaussian-weighted filter to remove fluctuations on time scales less than 30 years; smoothed values are obtained up to both ends of each record by extending the records with the mean of the adjacent existing values.”

Similarly in TAR figure 2.21 the end-point padding is stated as:

“All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.”

Mosher is claiming a very specific smoothing procedure was used, one that differs from the procedure stated in these figure captions. I asked what the basis was for this claim, but no particular email from the scientists emerged to explicitly support it, and the closest thing to any analysis of the problem was a pointer to this thread at ClimateAudit where, if the endpoint padding procedure was examined at all, it is certainly not clear from the discussion.

One of the commenters pointed to the difference between the Briffa 2001 curve in the AR4 Figure 6.10b figure and in this NCDC page on the reconstructions:

Briffa 2001 reconstruction with others from NCDC

And indeed you see the Briffa curve (light blue) drops rather precipitously in the NCDC figure close to its endpoint in 1960, while the IPCC AR4 figure doesn't drop nearly so far - here's a closeup of the IPCC version:

Figure 6.10b from IPCC AR4 WG1 - closeup on the endpoints

So why are these different? While the differences in the individual curves common to both figures seem rather minor visually, there definitely looks to be a problem with the handling of smoothing near the Briffa 2001 endpoint in 1960. But note that the NCDC figure actually doesn't specifically state what endpoint padding was used for its graphs - it only says "All series have been smoothed with a 50-year Gaussian-weighted filter". Perhaps Mosher is right, that the NCDC figure uses the nearby-mean endpoint padding that the IPCC figure claimed to use, while the IPCC figure uses the instrumental data for padding, contrary to its specific claim about padding with the mean? If Mosher is right, that means the scientists really did conceal what they were doing here, and the figure caption for figure 6.10b (and presumably for the TAR figure as well) was a lie.

A commenter (Glen Raphael, #112) at Angliss' post thought proof of Mosher's point was that nobody had debunked it yet:

Perhaps an even stronger bit of evidence is that we haven’t seen Mosher’s claims “debunked” by any of the usual suspects. If his account were incorrect and there were some innocuous alternative way to generate the same graphics, don’t you think we’d have heard about it by now? Wouldn’t a rebuttal have shown up in gloating posts or comments at Deltoid, DC, RealClimate, Tamino, or all of the above? I think it’s safe to say *if* these claims were false they’d be easy for somebody with access to the data to debunk and it’s also safe to say that if they *could* be debunked that would have been done

Well, maybe nobody has actually seen Mosher state what he's talking about so clearly before. But now that he has, yes, it should be easy to debunk. Let's take a look.

The raw data for the NCDC graph is available for download via the page linked above. And here's what it looks like:

Raw (unsmoothed) NCDC data (R code here)

Now let's apply a Gaussian smoothing filter with nearby-mean endpoint padding as both the NCDC and IPCC AR4 figures claimed - though I don't know exactly the parameters or equation they used, I found a functional form that seems to roughly reproduce the main features of the curves in those figures:

NCDC data with Gaussian smooth (10-year) + padding with mean of first/last 15 years (R code here)
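The smooth-plus-mean-padding step can be sketched in Python (my figures were generated in R, linked above). The sigma, the 3-sigma filter half-width, the 15-year padding window, and the toy series are illustrative choices, not the exact parameters used for the published figures:

```python
import numpy as np

def gaussian_smooth_mean_pad(x, sigma=10.0, pad_mean_years=15):
    """Gaussian-weighted smooth; the series is first extended at both
    ends with the mean of the adjacent pad_mean_years values - the
    padding rule stated in the AR4 Fig 6.10b caption."""
    half = int(3 * sigma)  # filter half-width, truncating tiny tails
    left = np.full(half, x[:pad_mean_years].mean())
    right = np.full(half, x[-pad_mean_years:].mean())
    padded = np.concatenate([left, x, right])
    t = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (t / sigma) ** 2)
    w /= w.sum()  # normalize weights so a constant series is unchanged
    return np.convolve(padded, w, mode="valid")

# toy series: flat at -0.2 with one warm final value, echoing the
# Briffa 2001 1960 endpoint of 0.076 mentioned below
x = np.full(50, -0.2)
x[-1] = 0.076
sm = gaussian_smooth_mean_pad(x, sigma=5.0, pad_mean_years=15)
```

Because the padding uses the mean of the adjacent values, the smoothed curve ends near that local mean rather than chasing the single warm endpoint - which is why this rule produces the gentle tail-off seen in the AR4 figure.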

The Briffa 2001 curve (blue, ending in 1960) looks remarkably like the curve in the IPCC AR4 Figure 6.10b above. Let's look at close-ups near the end-point:

Magnified view of smoothed NCDC curve, and IPCC AR4 Fig 6.10b (R code here)

The Briffa 2001 curve is blue on the left (from NCDC data) and light blue on the right (IPCC). While the right-hand light blue curve is a little hard to see under all the others, it seems to peak at very close to zero, slope down and then tail off to flat right around -0.15 in both cases. Comparing to the Briffa 2000 curve (green on the left, darker green on the right) the Briffa 2001 endpoint is just a little above the bottom of the valley of the Briffa 2000 curve in both cases. I.e. a pretty good match.

But then, what explains the NCDC figure which had the Briffa 2001 data definitely heading rapidly downwards at the end? As I mentioned, the NCDC page doesn't say how endpoints were handled in that figure, so we'll need to do a bit of guessing.

The first natural possibility is that the endpoints were padded with the very last (or first) point in the series - rather than taking the mean of 15 or 25 points near the end, using the very last point alone. Doing that with the raw NCDC data gives this figure:

NCDC data with Gaussian smooth (10-year) + padding with first/last year data (R code here)

Oops! That looks even less like the NCDC graph, and not like the IPCC graph either - here's a close-up on the end:

Magnified view of smoothed NCDC curve with endpoint padding, compared with nearby mean padding (R code here)

The Briffa 2001 data now has a new valley at about -0.1 and then curves up - the reason for this is that the 1960 endpoint has a value of 0.076, much higher than the typical (negative) values in earlier years. So that one endpoint pulls the whole curve up when you pad and smooth in this fashion.
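That pull-up effect is easy to reproduce on a toy series. Assuming nothing about the real data beyond a warm final value like Briffa 2001's 1960 point (0.076), here is last-value padding compared against the stated mean-of-adjacent-values rule; the helper function and all parameters are illustrative:

```python
import numpy as np

def gaussian_smooth(x, left_pad, right_pad, sigma=5.0):
    """Gaussian smooth with a constant padding value at each end."""
    half = int(3 * sigma)
    padded = np.concatenate([np.full(half, left_pad), x,
                             np.full(half, right_pad)])
    t = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (t / sigma) ** 2)
    w /= w.sum()
    return np.convolve(padded, w, mode="valid")

# toy stand-in for the Briffa 2001 tail: negative values, warm endpoint
x = np.concatenate([np.full(40, -0.2), [0.076]])
left = x[:15].mean()
sm_mean = gaussian_smooth(x, left, x[-15:].mean())  # stated AR4 rule
sm_last = gaussian_smooth(x, left, x[-1])           # pad with last value

# the warm last value (0.076) drags the smoothed endpoint upward
print(sm_last[-1] > sm_mean[-1])
```

So a single anomalously high endpoint, repeated across the whole half-window of padding, dominates the end of the smoothed curve - exactly the artifact visible in the figure above.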

So clearly the NCDC graph didn't use the IPCC padding or simple endpoint padding for the Briffa 2001 data. One likely alternative was that they used the original Briffa 2001 data extended beyond 1960 for padding, despite the fact that the scientists had concluded it was no longer a valid temperature proxy after 1960. The NCDC data doesn't include the post-1960 Briffa 2001 data, and I couldn't find it in a quick search, but looking elsewhere it appears the data quickly falls to somewhere around -0.4 degrees temperature anomaly. So I took the simple expedient of padding the Briffa 2001 curve with -0.4's:

NCDC data with Gaussian smooth (10-year) + padding (Briffa 2001 only) with -0.4 (R code here)

and here's a comparison of the endpoint region:

Magnified view of smoothed NCDC curve with -0.4 padding (for Briffa 2001), compared with nearby mean padding (R code here)

This indeed looks very much like the original NCDC figure. So the unstated padding in that figure that brought the Briffa 2001 curve down so much was very likely use of the original Briffa 2001 curve beyond 1960, while chopping off the smoothed curve in 1960. That's perhaps justifiable, but a little inconsistent with the statements concerning the source of that data.

So it's pretty clear that the difference in endpoint smoothing between the NCDC and IPCC figures does not require grafting the instrumental data onto the tree-ring data as Steven Mosher claimed. But what do the graphs look like if you do that grafting?

NCDC data with Gaussian smooth (10-year) + padding (Briffa 2001 only) with instrumental data (R code here)

and comparing the endpoints region again:

Magnified view of smoothed NCDC curve with instrumental padding (for Briffa 2001), compared with nearby mean padding (R code here)

While the difference is small, you can see that the instrumental-padded curve flattens out more quickly than the mean-padded curve, and never goes much below -0.1 in temperature anomaly, while the mean-padded curve does go lower. Only the mean-padded curve matches the behavior of the IPCC AR4 WG1 Figure 6.10b illustrated above.
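The three candidate padding rules can be put side by side on a toy series. The -0.4 constant (standing in for the post-1960 proxy data) and the rising "instrumental" ramp are rough illustrations of the shapes involved, not the actual data:

```python
import numpy as np

def gaussian_smooth(x, right_pad, sigma=5.0):
    """Gaussian smooth of x; left end padded with the mean of the first
    15 values, right end with the supplied padding array."""
    half = int(3 * sigma)
    padded = np.concatenate([np.full(half, x[:15].mean()), x,
                             right_pad[:half]])
    t = np.arange(-half, half + 1)
    w = np.exp(-0.5 * (t / sigma) ** 2)
    w /= w.sum()
    return np.convolve(padded, w, mode="valid")

half = 15
# toy stand-ins: proxy tail near -0.2 with the warm 1960 endpoint
proxy = np.concatenate([np.full(40, -0.2), [0.076]])
pad_mean  = np.full(half, proxy[-15:].mean())  # stated AR4 rule
pad_minus = np.full(half, -0.4)                # post-1960 proxy stand-in
pad_instr = np.linspace(0.0, 0.5, half)        # instrumental stand-in

end_mean  = gaussian_smooth(proxy, pad_mean)[-1]
end_minus = gaussian_smooth(proxy, pad_minus)[-1]
end_instr = gaussian_smooth(proxy, pad_instr)[-1]

# -0.4 padding drags the endpoint down; instrumental padding props it up
print(end_minus < end_mean < end_instr)
```

The ordering is the whole argument in miniature: post-1960-proxy padding gives the steep NCDC-style drop, mean padding gives the intermediate AR4-style tail, and instrumental padding would hold the endpoint higher than either - which is not what the AR4 figure shows.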

So, conclusively, despite Mosher's claims of certainty on what the scientists did, he was wrong.

But it sure takes a lot of effort to prove that one claim wrong, doesn't it?




One more thing I would have said at CA if the thread were still open.

I asked Steve how he planned to restore his credibility. His response was that:

1. I admit my mistake clearly.
2. I take full responsibility
3. I submit to questioning from my opponents.

Those are good steps, although I'm not sure why he thinks he has "opponents". Critics might be a better characterization. Or doubters.

But the one thing missing here that I was hoping for was a strong commitment to do better in the future. Scientists who have made errors (such as Mann clearly did in at least a couple of ways with his original "hockey stick") commit to not repeating those mistakes, and as far as I can tell, those commitments have been taken seriously and fulfilled. That's what builds credibility.

Furthermore, in his explanation of his "mistake", he said something about atypically avoiding "nuance" in his strong statement of fact that turned out to be at least partly wrong. Scientists generally try to avoid "nuance" in everything, to be absolutely clear in their statements, to make statements of fact that can be followed up on and verified. What I would have really liked to see from Steve Mosher was a specific commitment to:
(1) continue to make quantitative detailed and falsifiable statements, and avoid the shelter of "nuance" and subtlety
(2) when doing so, make every effort to not make strong statements for which you do not have sufficient evidence to support them.
(3) stick around to be questioned about such statements, and provide the supporting evidence when questioned.

That would restore credibility. Just admitting when confronted and responding to questions doesn't cut it, at least for me.

Arthur,


"But the one thing missing here that I was hoping for was a strong commitment to do better in the future. Scientists who have made errors (such as Mann clearly did in at least a couple of ways with his original "hockey stick") commit to not repeating those mistakes, and as far as I can tell, those commitments have been taken seriously and fulfilled. That's what builds credibility."

That seems like a reasonable request. I would note, however, that there remain a few issues WRT Mann, for example, where he doesn't live up to your standard. They are notable, but this would derail the conversation. So, you have my commitment. Normally, I would ask you to demand the same behavior of Mann or Jones. I won't. Whether you hold others to the standard you propose for me is immaterial. Your standard is one that I'll accept because I would demand it of others. I won't even hold you to your own standard (you've said one thing about me at CA which is wrong based on certain assumptions you hold). You go look and see if you can find it.

"Furthermore, in his explanation of his "mistake", he said something about atypically avoiding "nuance" in his strong statement of fact that turned out to be at least partly wrong. Scientists generally try to avoid "nuance" in everything, to be absolutely clear in their statements, to make statements of fact that can be followed up on and verified."

I think this would make an interesting assumption to challenge, especially using the mails. You will find instances, for example, where they discuss hiding certain errors (minor errors of using figures they knew to be wrong) and where they discuss 'hiding' corrections in obscure journals so as to avoid embarrassment. But again, that would derail the conversation, and I don't expect you to correct your statement above. Just do some more reading. Then ask yourself: am I willing to hold everyone to this standard, and demand the same of them? I don't expect you to. This is not an issue about the truth of the science; I never questioned that. It's rather about how we treat and communicate uncertainty. So, I've not been perfect in that. I expect that you will have nothing detailed and critical to say in any way, shape or form about other parties. I expect that you will never find a way to put a variety of behaviors to your test.

I think nuance was the wrong word for me to use. Let me illustrate the problem. When you talk about the "trick", the "nuanced" way to discuss it (in my mind) is the one that is full of all the detail, every nuance, every distinction: which methods have been fully replicated, who did what exactly. The strong way of putting it OVERSIMPLIFIES all the detail: "they truncated data and appended temperature series." I think when I tried to explain the difference between these two approaches I referenced the difficulty I've had when, for example, I've tried to make a nuanced argument that Jones is not guilty of fraud. That's the context you need to understand my use of the term. So for me, when I said "nuanced" I meant full of detail and distinctions.

What I would have really liked to see from Steve Mosher was a specific commitment to:
(1) continue to make quantitative detailed and falsifiable statements, and avoid the shelter of "nuance" and subtlety

Framed this way, I can agree. Again, I don't expect you to hold yourself to the same standard or to hold others to the same standard. The issue is whether or not the entire question of making the "uncertain" appear more certain is on the table with regard to ch06. Again, I expect people will not want to discuss this:

I fully expect that when I have finished answering your questions, you will not bring the same approach to every other topic. And I expect that you will have a valid reason to avoid those investigations. Again, I'll accept your definition of nuanced here even though it was not what I was trying to point out.

(2) when doing so, make every effort to not make strong statements for which you do not have sufficient evidence to support them.

Agreed. Again, I won't hold you to your own standard, nor will I ask you to hold others to it. (This would just be a start.)

(3) stick around to be questioned about such statements, and provide the supporting evidence when questioned.

This is something that we routinely ask scientists to do at climate audit. The general response is that they don't like the treatment they get there, or some such thing. Since I rarely comment at places other than CA and Lucia's, I think I'd rather handle this by not making comments at other places in the first place, if that's all right with you. My experience is that the people who like to ask me questions typically don't want to answer questions themselves, or they do so in a cursory manner. If I had a reasonable expectation that you would treat the mistakes of others as you have treated mine, and hold them to the same standard, then I'd gladly return here to answer more of your questions. So I have no issue answering all of your questions, but I do think I can reasonably suggest alternative forums. Failing that, Amac is an excellent intermediary and he summarizes things well without engaging in snark. It might do well for him to compile a list of agreements and open issues.

That would restore credibility. Just admitting when confronted and responding to questions doesn't cut it, at least for me.

Anyway, perhaps Amac can compile a list of your demands, and post them in a nice tidy way. Then I can agree to each and every one. And then you can explain (or not, it's not required) that while you certainly would hold everyone to the same standard, it's not your job to do so. That's the best approach for you to take. But having your standard detailed and agreed upon would be a good thing. So, work out the entire standard with Amac; we will call it Smith's rules. You need not, nor do I expect you to, apply this standard to others. But it's good to have your full standard written down and agreed to by you. I expect that you will drop a requirement to answer questions from critics. I expect you will drop a requirement to supply supporting evidence or data. I expect you will not have a requirement for crediting those who point out your errors. Nevertheless, having your standard laid out and showing how somebody complies with it, or even exceeds it, will be instructive for all the people who discuss these issues. I think it's an excellent idea and will be an excellent tool.

PS: I'll check back to this comment in a few days.


On the "my standard" question, your comment here seems a little jumbled: did my list of 3 additional criteria meet your expectations, or are you looking for something else?

I agree that I misinterpreted what you meant by the word "nuance" and I apologize for that - detail and specificity are good, not bad.

Here's what I'd really like to see from you or any of the folks who may still be paying attention: what is your best case against any of the climate scientists (Briffa, Mann, Jones, Hansen, Schmidt, whoever)? By that I mean something that satisfies as many as possible of:

(1) A result (graph, table, number) presented in a peer-reviewed or IPCC article that was false - i.e. said to be one thing, but was actually something else. Incomplete presentation is not sufficient - I want to see something that was actually false (such as this AR4 case would have been if it had worked out). Truncation doesn't count unless they claimed to be presenting a whole series and clearly actively concealed the truncation. End-point smoothing doesn't count (for example the Briffa 2001/NCDC graph) unless they specified how they were handling the endpoints and did it differently. Etc.

(2) Where the falsity made a material difference to the overall message of the graph, table or number. That is, the visual or mental impact of the difference is obvious to a cursory look at the presentation, and doesn't require detailed magnification of the curve or looking at the last decimal point to see any difference.

(3) Where the problem, identified by blogger or scientist or whoever, has been presented in a clear manner demonstrating the "wrong" and "right" versions for all to see

(4) Where the original scientific group responsible has not responded with acknowledgment of the error and corrected the record as far as possible, and committed not to make the same mistake again

(5) Where the original group has in fact repeated the error more than once, after public disclosure of the problem.

Since I actually work for peer-reviewed journals I'm familiar with a couple of cases of scientific fraud, and at least the first 4 criteria were met in each case. Generally, at the point of public disclosure the scientist involved has been fired, even while not admitting of any wrong-doing, so they haven't had an opportunity to go to (5). But the claims of fraud in climate science go back at least a few years and everybody seems to be still publishing. So if the claims are real, you guys should be able to find a case with all 5.

If you can only meet the first 2 I would be willing to put together the public case #3 - but it had better be air-tight.


Interesting, I think Tiljander meets criteria 3-5. It was used upside-down again after McIntyre published on its improper use.
Darrell Kaufman had the sense to correct his upside-down usage.

The problems with Mann 08 are so huge that I'm not sure Tiljander meets criteria 1 or 2 though. AMac?

The TAR graph in question is accused of meeting 1-4.

Tiljander/Mann Fraud?


...Short answer: No, But.

This freestanding comment is a Reply to MikeN's "Interesting, I think" (Sun, 6/27/2010 - 00:36) and Arthur Smith's "On the 'my standard' question" (Sat, 06/26/2010 - 18:26). This seeming side-issue may illuminate some of the points being discussed concerning the termination of the Briffa series in 1960.

Arthur listed 5 criteria in "On the 'my standard' question". Paraphrasing,

(1) A false result is presented in a peer-reviewed article or IPCC report.
(2) The falsity made a material difference to the overall message of a graph, table or number.
(3) The "wrong" and "right" versions of the identified problem have been presented in a clear manner.
(4) The authors have not acknowledged and corrected the error, or committed to not repeat the mistake.
(5) The authors have repeated the error, after public disclosure of the problem.

Two definitions for "Fraud":

a: deceit, trickery; specifically: intentional perversion of truth in order to induce another to part with something of value or to surrender a legal right
b: an act of deceiving or misrepresenting: trick

We're in tricky [sic] territory already: accusers can mean (or claim to mean) that they're discussing "misrepresentation", but the charge of evil intent is present or nearby. Lack of care and precision in statements made by scientists and advocacy bloggers is one of the major polarizing factors in the AGW dispute, IMO. Steve covered this ground nicely in Climategate: Not Fraud, But 'Noble Cause Corruption' (also note the cries for blood in the comments).

It's tractable to evaluate what somebody wrote in a journal article, much less so to ascertain what was in their heart at the time of writing. To me, this says most "fraud" charges will be either wrong or unprovable. They'll always be red flags to a bull (bug or feature?).

As described in the Methods and SI of Mann08 (links here), Prof. Mann and co-authors set out to catalog and use non-dendro proxies that contain temperature information. They assembled candidates and looked at behavior over the time of the instrumental record, 1850-1995. During this time of warming, the calculated mean temperature anomaly in most CRUtem cells (5 deg longitude x 5 deg latitude) rose. Proxies with parameters that also rose passed screening and progressed to the validation step (see the paper). The four measures (varve thickness, lightsum, X-Ray Density, and darksum) taken by Mia Tiljander from the lakebed varved sediments of Lake Korttajarvi, Finland also passed validation, and thus were used in the two types of paleotemperature reconstructions (EIV and CPS) that make up the paper's results. The authors recognized potential problems with the Tiljander proxies, but used them anyway. Because of their length (extending much earlier than 200 AD) and the strength of their "blade" signal (Willis Eschenbach essay), the proxies are important parts of the reconstructions.

The evidence is overwhelming that Prof. Mann and co-authors were mistaken in their belief that the Tiljander proxies could be calibrated to CRUtem temperature anomaly series, 1850-1995. The XRD proxy is discussed here. The issue was recently raised again by an A-List climate scientist and blogger at Collide-a-Scape, in The Main Hindrance to Dialogue (and Detente). Gavin and Prof. Mann's other allies are unable to address the matters of substance that underlie this controversy; see my comment #132 in that thread.

Arthur's 5 Criteria and Mann08/Tiljander
(0) Mann08's use of the Tiljander proxies is not fraud, IMO. All evidence points to an honest mistake.

(1) False result presented in a peer-reviewed article? Yes.

(2) Falsity made a material difference to the overall message of [graphs]? Hotly contested. Mann08 has many added methodological problems, making it difficult to know (see comment #132 and critical posts linked here). IMO, this demonstrated failure of key Mann08 methods (screening and validation) calls the entire paper into question.

(3) Clear presentations of "wrong" and "right" versions of the identified problem? Hotly contested. Gavin believes that the twice-corrected, non-peer-reviewed Fig S8a shows that errors with Tiljander (if any) don't matter. I rebut that in comment #132 and in this essay.

(4) The authors have not acknowledged and corrected the error, or committed to not repeat the mistake. Yes. In their Reply published in PNAS in 2009, Mann et al. called the claims of improper use of the Tiljander proxies "bizarre."

(5) The authors have repeated the error, after public disclosure of the problem. Yes. Mann et al. (Science, 2009) again employed the Tiljander proxies Lightsum and XRD in their inverted orientations (ClimateAudit post); see lines 1063 and 1065 in "1209proxynames.xls" downloadable in zipped form from (behind paywall).

Summary, and Lessons for the Briffa Series Truncations
The key issue is not fraud. Nor is it that authors of peer-reviewed articles make mistakes. Everybody--scientists, book authors, and climate-bloggers included--makes mistakes.

Instead, the important question is: Does climate science adhere to Best Practices? Appropriately, scientists and bloggers scrutinize articles that cast doubt on the Consensus view of AGW, as shown by Tim Lambert in the 2004 radians-not-degrees case. What about papers that support the Consensus view? Are such errors in those papers picked up? Do the authors correct those papers, too?

Best Practices don't mainly concern the detection of glaring, easily-understood errors like a radian/degree mixup or an upside-down proxy. There are a host of issues -- as there are with drug research, structural engineering, mission-critical software validation, and a large number of other areas. I won't enumerate them -- beyond a plea for the correct and rigorous use of statistical tools. Recent threads at Collide-a-scape are full of suggestions and insights on this question, from AGW Consensus advocate scientist Judith Curry, and many others.

The key to the Tiljander case is the defective response of the scientific establishment and the AGW-advocacy-blogging community. I think it teaches that paleoclimatology is a young science that has yet to establish Best Practices (as the concept is understood by other specialties, by regulators, or by the scientifically-literate lay public). To the extent that Best Practices should be obvious -- e.g. prompt acknowledgement and correction of glaring errors -- scientists' and institutions' responses merit a "D" or an "F" to this point.

Broadly speaking, I think scientifically-literate Lukewarmers and skeptics accept the analysis of the last few paragraphs. In contrast, opinion-setters in the climate science community and among AGW-Consensus-advocacy bloggers emphatically reject it.

IMO, these differing perceptions explain much of the gulf between the opening positions of Steve Mosher and Arthur Smith on the general question of the justification for the 1960 truncation(s) of the Briffa series, and on the specific question of Steve's error in ascribing the padding of the AR4 truncation to a splice with the instrumental record.

I've started a new post

I've started a new post featuring both your comment and Steve Mosher's below - further comments there, thanks!

Arthur's criteria make clear

Arthur's criteria make clear something I have been struggling to communicate. I don't think Jones or Mann is guilty of fraud.
I've said this on a number of occasions. I've told skeptics NOT to overcharge the case. The reason is simple. It's not fraud.
And when you charge fraud, the underlying issue will get swept away. I HAVE NO ISSUE with appending a temperature series.
As long as it is FULLY DISCLOSED. As long as the CHOICE of doing things one way is accounted for by uncertainty calculations.
This applies to Tiljander as well. As long as Mann CLEARLY describes that he uses portions of Tiljander's data that Tiljander said were corrupted by human influence, and as long as he defends and measures the IMPACT of this decision, I have no issue. Kaufman decided NOT to use certain portions of the record. That's a choice. That choice may or may not have consequences WRT the final estimate. That choice needs to be documented and defended. Measured. Explained. Same for the Briffa truncation. Why 1960, when temps continue downward for the next 10 years? What does the curve look like with and without? That choice changes our certainty measures. By how much? The rejoinder will be this: page limits, or do the study yourself. Both of these are beside the point. My observation is this: these papers do not fully document the uncertainties of the analytical decisions made. They don't point to supplementary material showing that the sensitivity analyses were done. They overstate the certainty. Does that matter? I don't know.

I think this is a really

I think this is a really important comment. It lets me describe the central thesis of the book and our view of things.

What the mails detail is the creation of a bunker mentality. This mentality is best illustrated by some of the mails written by Mann. Essentially it is a vision of a battle between climate scientists and skeptics. Us and them. I put aside the question of whether this
mentality was justified or not. The important thing is that this mentality existed. Jones, in an interview after climategate, confirms the existence of this mentality. I do not think there is any evidence that contradicts this observation. The mentality existed. It is reflected in the language and the actions. What I try to focus on is how this mentality shapes or informs certain behaviors. We struggled a great deal with the language to describe the behavior. Fraud was too strong a description. I would say, and did say, that the mentality eroded scientific ethics and scientific practices. It led to behaviors that do not represent "best practices." These behaviors should not be encouraged or excused. They should be fixed.

When we try to make this case we face two challenges. We face a challenge from those who want to scream fraud and we face a challenge from those who want to defend every action these individuals took. Finding the middle road between "they are frauds" and "they did no wrong" was difficult, to say the least. In the end it's that middle ground that we want to claim. The mails do not change the science (we said that many times in the book), but the behaviors we see are not the best practices. We deserve better science, especially with the stakes involved. If our only standard is the standard you propose, then I don't think we get the best science. I'll just list the areas in which I think the bunker mentality led people to do things they would not ordinarily do. And things we would not ordinarily excuse.

A. Journals. There are a few examples where the mails show the small group engaging in behaviors, or contemplating behaviors, that don't represent best practices.

1. Suggesting that "files" should be kept on journal editors who make editorial decisions you don't agree with.
2. Influencing reviewers of papers.
3. Inventing a new category ("provisionally accepted") for one paper so that it can be used by the IPCC.

B. Data archiving and access.
1. Sharing data that is confidential with some researchers while not sharing it with others. If it's confidential, it's confidential. If it's not, then it's not.
2. Failing to archive data.

C. Code sharing.

1. Withholding code when you know that the code differs from the method described in the paper, and correspondents cannot replicate your results because of this discrepancy. And you know they cannot replicate BECAUSE of this failure of the paper to describe the code completely.

D. Administrative failures.
1. Failure to discharge one's administrative duties. See FOIA.

E. Failure to faithfully describe the total uncertainties in an analysis.

As you can see, and as we argue, none of these touches the core science. What we do argue is this: the practices we can see in the mails do not constitute best practices. I've argued at conservative sites that this behavior did not rise to the level of fraud. And I took my lumps for failing to overcharge the case. On the other hand, those who believe in AGW (as we do) are unwilling to acknowledge any failings. We were heartened by Judith Curry's call for a better science moving forward. We think that the behaviors exhibited do not represent the best science. We think we can and should do better. The gravity of the issue demands it. So on one side we hear the charges of fraud. That's extreme. On the other side we hear a misdirection from the core issue. When we point out that best practices would require code and data sharing, for example, the answer is "the science is sound." We don't disagree. What we say is that the best path forward is transparency and openness. Acknowledge that the decisions made were not the best and pledge to change things going forward.

Concern E is the heart of the matter WRT chapter 6 of WG1. In our view, Briffa was put under pressure to overstate the case.
That's not fraud. It's not perpetuating false statements. If you study the mails WRT the authoring of that chapter you will come away with the impression that Briffa was under pressure to overstate the certainty. That doesn't make AGW false. It cannot. It is, however, a worrisome situation.

Here is Rind advising the writing team.
"pp. 8-18: The biggest problem with what appears here is in the handling of the greater
variability found in some reconstructions, and the whole discussion of the 'hockey stick'.
The tone is defensive, and worse, it both minimizes and avoids the problems. We should
clearly say (e.g., page 12 middle paragraph) that there are substantial uncertainties that
remain concerning the degree of variability - warming prior to 12K BP, and cooling during
the LIA, due primarily to the use of paleo-indicators of uncertain applicability, and the
lack of global (especially tropical) data. Attempting to avoid such statements will just
cause more problems.
In addition, some of the comments are probably wrong - the warm-season bias (p.12) should
if anything produce less variability, since warm seasons (at least in GCMs) feature smaller
climate changes than cold seasons. The discussion of uncertainties in tree ring
reconstructions should be direct, not referred to other references - it's important for
this document. How the long-term growth is factored in/out should be mentioned as a prime
problem. The lack of tropical data - a few corals prior to 1700 - has got to be discussed.
The primary criticism of McIntyre and McKitrick, which has gotten a lot of play on the
Internet, is that Mann et al. transformed each tree ring prior to calculating PCs by
subtracting the 1902-1980 mean, rather than using the length of the full time series (e.g.,
1400-1980), as is generally done. M&M claim that when they used that procedure with a red
noise spectrum, it always resulted in a 'hockey stick'. Is this true? If so, it constitutes
a devastating criticism of the approach; if not, it should be refuted. While IPCC cannot be
expected to respond to every criticism a priori, this one has gotten such publicity it
would be foolhardy to avoid it.
In addition, there are other valid criticisms to the PC approach....."

The PARTICULARS of this are unimportant. What matters is Rind's advice about treating uncertainties in a forthright manner.
All of our criticism of Briffa can be summed up in one sentence: he didn't give the most forthright description of the uncertainties.
That's it. Whether it was his treatment of McIntyre's paper or his failure to disclose the truncated data in the clearest manner, that is the take-home point we want to stress.

Hmm, interesting. That seems

Hmm, interesting. That seems slightly at odds with some of the things Steve McIntyre has been saying, but perhaps not. Hard to tell what he's talking about sometimes. I'll have to think about this.

You've been saying this a

You've been saying this a lot, and no one's responded. Yes, the blog is very confusing, because he is basically speaking to people who've been reading him for a long time. It's a stream-of-consciousness blog, with only occasional summary items when readership spikes. It's difficult to understand what he's saying without reading plenty of old entries, which most people won't do. I think this feature also adds to the occasional acrimony on the blog.

I see climategate as a human

I see climategate as a human story, not a science story. As I state in the book, the mails DO NOT change the science. They cannot. They are just mails. Words. They are not facts or models. They depict a human story. That story is as follows. Mann, for whatever reason, feels embattled by the skeptics. He complains he is fighting the fight alone against a corporate enemy. Mann is advanced ahead of Briffa. Mann writes the TAR (see McIntyre's Heartland briefing). In the beginning, Briffa and Osborn are planning a publication that will be critical of Mann. Jones, boss of Briffa and co-author to both Briffa and Mann, is caught in the middle. Over time Jones comes to side with Mann and takes on his temperament WRT sharing code and data. In 2002 he shared data with McIntyre; by 2005 he has joined Mann and refuses data. Briffa and Osborn never write the paper. In AR4, Briffa is selected as the author. Overpeck pressures him to come up with a graphic more compelling than the hockey stick. Briffa complains that he should not be forced by Mann and Solomon to push the certainty beyond that asserted in the TAR. During the course of the writing I think it's clear Briffa is feeling the pressure. In any case, on my view of things (the Screwtape analogy), Briffa is the innocent soul that the greater demon (Mann) and the lesser demon (Jones) are trying to corrupt. So for me the editorial decisions that Briffa makes are key. McIntyre asked him to show the decline. Briffa refused. That battle had been fought back in the WMO chart and the TAR. Briffa couldn't show it, as he had in his original papers. And then there was the matter of how Briffa would treat MM05. That is where the confidential exchanges with Wahl matter.

In all of this I see Briffa as a rather innocent party driven to do things he would not ordinarily do. I see Jones falling victim to Mann's black-and-white view of the skeptics. I see that tribal view of things as dangerous for science and open inquiry. It's a pattern of behavior that could lead to false science. It hasn't yet.

Steve had a great metaphor he shared with me once. It's as if the climate scientists have an open-and-shut case on very strong evidence, and then they insist on bringing in this tangential evidence (paleo). That evidence is very uncertain, and the expert on that evidence violates all sorts of procedural technicalities, and the whole prosecution suffers, where irregularities in the weakest evidence are defended so vociferously that the entire case is put into question. The case for AGW would be better made by just accepting that the paleo record is highly uncertain and focusing instead on the best evidence.

It's a view of things that CANNOT be heard in the current climate. For example: I was going to call the book "Noble Cause Corruption." The idea here was that the scientists have a noble cause, but they slip into patterns of behavior that erode basic scientific ethics and practices. They have a guilty man. They have him dead to rights, but they start cutting corners here and there with procedure. A dangerous trend. It starts with planting evidence on a KNOWN killer, and then over time the innocent suffer.
But we rejected that title. Why? Because corruption was probably too strong a word.

What is the result? We write, "the result was a record of the science that asserts more certainty than the underlying science supports." That's it. But that's a really hard story to tell, and it's a story nobody can hear, because the skeptics hear us saying that there is nothing false or fraudulent, and the strong proponents can't hear any criticism whatsoever, because they fear any flaw WILL be blown out of proportion (and it will be).

So now perhaps you understand why I get frustrated with my own nuanced view of things.

I am in the middle of a bunch

I am in the middle of a bunch of things right now, so this post will be much briefer than I'd like. I'm going to try to just get a few quick points out, and I'll revisit them when I get a chance.

1) Bender was a horrible go-between. The behavior he exhibited in that thread is nothing more than that of a troll.
2) The problems with paleoclimatology are far greater than you believe. This is understandable, as the subject is lengthy, confusing and there is no real "introduction" to it. There are numerous issues which could be discussed, but the one which strikes me most is a misconception of yours. You say, "It leaves me confused, but (A) Mann seems willing to admit to not doing things quite right in his first few papers." The truth is Mann did not choose to admit errors. He was forced to due to the efforts of Steve McIntyre, and he never would have otherwise. In fact, he still refuses to admit many errors, including a number in his more recent papers.
3) The behavior you experienced at ClimateAudit is as shocking to me as it is appalling. I have read the site quite a bit, and I cannot remember ever having seen such behavior. A week ago I had basically nothing but praise for McIntyre and his site. I hope it was just some crazy fluke.

Regardless of anything to do with ClimateAudit, the second point is something worth looking into. There is far more to the situation than can be seen with just a cursory examination.

2) sounds interesting - is

2) sounds interesting - is there one detailed claim of an error in one of Mann's recent papers that somebody has made, and shown that the consequence of that error is something substantive? What I'd like to see would be, for example "graph of reconstruction with Mann's error corrected" vs. "graph of reconstruction by Mann" with some noticeable difference. I might be motivated to look into it myself then.

Thanks for your (1) and (3); I really appreciate having at least somebody who seemed to understand what I was trying to say.

I agree, that Bender can be

I agree that Bender can be unhelpful. Indeed, at one point his comment to me was just 'go away.' In my opinion his questions were pretty good, though. The whole post started with Brian Angliss criticizing the context, Steve Mosher explaining some context, and Arthur analyzing a particular claim by Steve. Bender asked which was the worse error.

Mann 08 is full of errors,

Mann 08 is full of errors, and ClimateAudit has diagnosed some of them. Mann has provided his code and data for this one, making it easier.
Coincidentally, one of the errors is upside-down Tiljander, an error I think Mann has not acknowledged. McIntyre published a comment on it, and Mann replied that the allegation of upside-down behavior is "bizarre."

MikeN - as I said, I'm really

MikeN - as I said, I'm really not interested in investigating Mann's stuff at this point, but that could change if somebody can point me to something specifically and apparently conclusively wrong and show it was in some way significant - if you have a pointer to a particular ClimateAudit page or elsewhere that clearly states and demonstrates what a (at least one) significant problem with Mann 08 is, that would be great.

That's fine, and I don't

That's fine, and I don't expect you to investigate Mann 08. It does not look easy, even given the page you put together here.

As near as I can tell, people started with Mann's code and data, and his described algorithms, and decided some of the proxies are flawed.
However, an overall before-and-after chart is not possible, because it is the algorithm itself that is flawed.
Take 1200 proxies, calibrate them to the temperature record, and average them together. Sounds good, but in reality it means you are selecting for series with an uptick at the end, while the earlier period averages out to 0. You could get the same effect with random data. Bürger published a comment along these lines in PNAS as a response. Here are some links:

From ClimateAudit:
Plenty more starting around page 60.
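The mechanism claimed above (screen noise series by calibration-period correlation, average, and an end-of-series uptick appears) is easy to simulate. The sketch below is entirely synthetic and illustrative: the AR(1) coefficient, the correlation threshold, and the series counts are arbitrary choices of mine, not anything taken from Mann 08's actual (more complex) procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

N_PROXIES, N_YEARS, CAL_YEARS = 1200, 600, 100  # all arbitrary toy numbers

# Red-noise pseudo-proxies (AR(1)) containing no climate signal at all
noise = rng.standard_normal((N_PROXIES, N_YEARS))
proxies = np.zeros_like(noise)
for t in range(1, N_YEARS):
    proxies[:, t] = 0.9 * proxies[:, t - 1] + noise[:, t]

# A rising "instrumental temperature" over the calibration window
temperature = np.linspace(0.0, 1.0, CAL_YEARS)

# Screening step: keep only proxies that "calibrate" (correlate) against
# the temperature record over the final CAL_YEARS
cal_window = proxies[:, -CAL_YEARS:]
r = np.array([np.corrcoef(row, temperature)[0, 1] for row in cal_window])
selected = proxies[r > 0.3]

# Standardize each surviving series, then average into a "reconstruction"
z = (selected - selected.mean(axis=1, keepdims=True)) / selected.std(axis=1, keepdims=True)
recon = z.mean(axis=0)

# The screened average rises across the calibration window even though
# every input series was pure noise
rise = recon[-10:].mean() - recon[-CAL_YEARS:-CAL_YEARS + 10].mean()
print(f"kept {len(selected)} of {N_PROXIES} noise proxies")
print(f"reconstruction rise across calibration window: {rise:+.2f} sd units")
```

The point of the toy run is only that correlation screening imposes the calibration-era shape on the average; whether and how much this affects any particular published reconstruction is a separate question.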

Brandon, this behavior

Brandon, this behavior happens. I was the target of some of it as well. Look at the Kaufman threads.

AMac, I think you are off on

AMac, I think you are off on one point. Steven Mosher only conceded an error with regard to AR4, which is the claim Arthur's post tests.
With regard to TAR, ClimateAudit believes, but has not conclusively proven, that the TAR chart uses instrumental temperatures; a weaker level of confidence than Tiljander, if you will.
I think Mann's RealClimate comment linked above gives evidence that the TAR chart was done in the manner described, as Mann was the editor for TAR.

Of Steven Mosher's original claims about WMO, TAR, and AR4, Mosher has retracted his AR4 claim of padding with the instrumental record.
He is also in error about WMO with regard to cutting off at 1960: that chart is not truncated there, but it is padded with instrumental temperatures.

So TAR remains. You seem to be good at this AMac. Can you locate the sources for the TAR graph?

Arthur, this exchange between

Arthur, this exchange between me and you in these comments is the way that I think these sorts of discussions should proceed. I'm not interested in the personalities, but in (1) the technical details, (2) what these details say about the appropriate confidence levels that scientifically-literate public should have in "the science" (or--in this case--in the Fuller/Mosher book), and then (3) what this may mean for climate science in general, and for policy. (I won't expand on the reasoning behind this perspective.)

You did some nice due diligence on a confusing topic, and came up with an apparent mistake in the Fuller/Mosher narrative. On the one hand I would have preferred less snark, but on the other hand you didn't add that much by climate-debate standards, and strong statements attract eyeballs. On the third hand, emotions are so inflamed that the climate-blog world is a tinderbox, not in need of even modest extra helpings of gasoline and matches.

(Look at what happened to the threads at Collide-a-scape, once Keith Kloor reluctantly placed some of the usual-suspect hotheads into moderation. The quality zoomed up, as some A-listers came and stuck around, and some B-listers turned out to be unusually insightful.)

IMO, Steve Mosher's performance in this affair has been "pretty good." If I grade on a climate-blog curve, he gets an A+.

Yeah, "opponent" was a poor word choice. Etc. But let's recall that he responded promptly, responded clearly, admitted error, provided additional context, and did so without spicing the dish with ad hominems and irrelevancies.

As far as his analysis of the late-20th-Century part of the Briffa dendro reconstruction as presented in the AR4 Chapter 6 figure, we can now all agree that he was wrong to assert that the 15 or 30 years of the smoothed curve up to 1960 were generated through the use of a splice with the instrumental record.

As far as I know, the remainder of his analysis of the use of paleo records in AR4 stands. Same for WMO. Same for TAR, with the exception of the analogous instrument-splice question (I'm unclear on that). As you note, the reader's confidence will be reduced by some amount, by the recognition that the authors were in error in this matter. Some of that confidence will be restored by Steve's subsequent conduct. These are, of course, typical YMMV issues.

Re-reading your response, some other things --

1) I cannot understand how it is even remotely acceptable for the Briffa series to be truncated at 1960. First: to me, this is an illustration of an issue that comes up time and again in the paleo reconstructions. "Statistics" is a set of mathematical formulas and procedures -- but it is also a way of looking at the world. Post hoc alterations to data and hypotheses are particularly troublesome, as they don't affect the numerical manipulations. So a mean, SD, r^2, P, whatever, can still be calculated by a faithful computer. But these numbers no longer represent what one thinks they represent. This was fully acknowledged over a decade ago in the pharma industry. Try a "trick" like this in a Phase III trial and it will be excluded from the regulatory filing. Your career path will inflect, and not the way you'd hope. Analogous examples could be drawn from other disciplines. Why is paleoclimatology in such a backwards state? Why do the leading lights of climatology defend these insupportable practices?

Second: the premise of dendrochronology is that tree rings (scratch that: rings from particular, selected trees (red flag!)) can be calibrated to the instrumental record, and that the relationship that is established can be used to hindcast temperatures. So: what is the prior hypothesis that allows climate scientists to disqualify the 1960-2000 decades of the calibration period? There is none! The reasoning starts with "after 1960, it didn't work the way it should have with those trees." See "First:", supra.
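The calibrate-then-hindcast premise described above can be made concrete with a toy sketch. Everything here is hypothetical: the linear model, the noise levels, and the era lengths are my own illustrative assumptions, not any published reconstruction's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy world: ring widths respond linearly to temperature, plus noise
n_pre, n_cal = 100, 100
true_temp = np.concatenate([rng.normal(0.0, 0.5, n_pre),    # "pre-instrumental" era
                            np.linspace(0.0, 1.0, n_cal)])  # "instrumental" era
rings = 0.8 * true_temp + rng.normal(0.0, 0.2, n_pre + n_cal)

# Calibrate on the instrumental era only: fit temperature ~ ring width
slope, intercept = np.polyfit(rings[n_pre:], true_temp[n_pre:], 1)

# Use the fitted relationship to hindcast the pre-instrumental era
hindcast = slope * rings[:n_pre] + intercept
rmse = np.sqrt(np.mean((hindcast - true_temp[:n_pre]) ** 2))
print(f"calibration slope: {slope:.2f}, hindcast RMSE: {rmse:.2f}")
```

The sketch only works because the toy trees respond the same way in both eras; disqualifying part of the calibration period after the fact is exactly the move that breaks this assumption, which is AMac's point.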

2) You mentioned Prof. Mann and his work, and then Tiljander. Ay ay ay. The use of the Tiljander proxies by Mann08 is the poster child for what ails climate science. More exactly, the response of the climate-science and AGW Consensus advocacy communities to Tiljander-in-Mann08 illustrates the deep and unacknowledged problems that confront climate science and thus AGW policymaking. If you Googled my pseudonym, you knew I'd say this. If you didn't know: I'm curious, why'd you bring up Tiljander?

3) "On your summary of the state of things - it seemed pretty fair." (This ClimateAudit comment.) Thanks -- having parties acknowledge areas of agreement is constructive and important, I think. Too often, that gets neglected, 'cause it's "on to the next battle!" Note that Steve Mosher has replied to that comment, and is in broad agreement, as well. As far as the treatment of the late-20th-Century part of the Briffa data trace as portrayed in TAR: it seems that Steve, you, and UC ought to be able to come to a "consensus" on what was likely done.

4) Re: Steve, "But the one thing missing here that I was hoping for was a strong commitment to do better in the future." That seems to me to be implicit in his responses, but OK, you think it should be explicit. As far as the three specifics you go on to describe, they each sound good to me, though our perspectives differ. We live in a fallen world where book authors do make mistakes. The more ambitious the book, the greater the likelihood of errors. The biggest thing to me is to acknowledge those mistakes as they come to light. But you and I are grading with different curves, clearly.

5) "You seem to be good at this AMac. Can you locate the sources for the TAR graph?" Thanks for the compliment, but all I know about this is what I've just learned from you, Steve McIntyre, MikeN, Steve Mosher, and selected other commenters at ClimateAudit and here. (I have the Mosher/Fuller book, but it hasn't reached the top of the pile yet.) Regrettably, little help from this quarter on finding those sources. Have you tried asking Steve?

This is heading off topic,

This is heading off topic, but your comments do deserve some response. I don't want to continue in non-productive directions, so further comments that seem to me headed that way will not be approved. Matters of fact - including who said what, where - are fine. Matters of opinion (anything that includes the word "should" or moral terms such as "acceptable") are flame bait. And no more on Mann, please! [Exception, see my response to Brandon above]

But to respond on this, first:
(1) If the truncation is concealed or falsely claimed not to have been done, that's a problem, because it is a matter of fact that is in reality different from the way presented. But if the truncation is done openly, as seems the case here, then doing it or not is a matter of opinion on which I would defer to experts; I am not one and will really have no more to say on this here.

(2) I never mentioned Tiljander - where did you get that idea? [Edit - I see you were responding to MikeN here thinking it was me!] I know nothing about that other than it seems to be some sort of byword for something CA people have been complaining about lately. I never had any intention of getting into Mann's reconstructions.

(3) Yes I hope we'll get a definitive answer on TAR. I do wonder why, if it's been discussed so much at CA in the past, there isn't a clear post with the data from UC etc. showing precisely what his theory of what was done is, and why it matters. Not sure why it's up to somebody like me or you (5) to clear things up.

(4) Acknowledging your errors after they have been revealed is one thing. It's a good thing, but there is a far better way. But that's a matter of "should" again, and I can understand disagreement.

(5) That was MikeN...

The why it matters is in

The why it matters is in Steve Mosher's post and book.
Also, the TAR diagram does not have the truncation clearly described.
"There is evidence, for example, that high latitude tree-ring density variations have changed in their response to temperature in recent decades, associated with possible non-climatic factors." That caveat was located a bit away from Figure 2.21.

Arthur, let's get your


Arthur, let's get your principles down clearly.


You have a time series that you are using as a proxy for temperature. That time series is well correlated with the temperature series for 75% of its full period, with the final 25% (or pick a number) correlated less well; let's say inversely. That is your raw data.
You hypothesize that this divergence is due to some other factor hitherto unexplained. You have some ideas, some tentative guesses, but you have no clear theory to explain the divergence. No math, no model. You make a decision to truncate the series where it diverges. You explain this decision. You explain that the divergence problem is something that may be explained in the future, but for now you will make the decision to truncate the data and analyze the portion that does correlate well. You believe that the part that doesn't correlate well will someday be explained by some theory. You don't believe the data is BAD. You believe it was properly collected and measured. You just have this divergence that current theory is at odds with. You trust that someday the theory will be expanded or modified to explain this divergence. It is a general practice in your field to archive data at an open repository.

1. Do you archive all the data, so that future researchers can someday explain the divergent part, or at least plot the divergent part for themselves?
2. Do you archive the full series and the truncated series, with appropriate metadata to explain the difference?
3. Do you archive only the truncated series?

or some other option. And then we can discuss how you decide where exactly to make the cut. Where it first goes negative?
Or any part of the curve where it goes negative? Is there uncertainty related to this decision? How do you faithfully depict the uncertainty WRT this decision, that is, the decision of where exactly to cut a curve? Is that decision a certainty? If it's not a certainty, what is the best way to show the uncertainty related to the decision? But that's too many questions. Let's first start with the simple one of preserving the scientific record. If there is no explanation for the divergence, no model to explain it, do you ARCHIVE all the data or do you only archive the part that makes sense? Pretty straightforward question that everyone who works with data will understand. Archive all the raw data or not?
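To make the "where exactly to cut" question concrete, here is one minimal sketch of a divergence-detection rule on synthetic data. The window length and the first-negative-correlation criterion are arbitrary assumptions chosen for illustration; they are one possible rule, not the rule any paleo group actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a proxy tracks "temperature" for 75% of its length, then diverges
n = 200
temperature = np.linspace(0.0, 4.0, n) + rng.normal(0.0, 0.05, n)
proxy = temperature + rng.normal(0.0, 0.1, n)
proxy[150:] = proxy[150] - np.linspace(0.0, 2.0, n - 150)  # divergence sets in

# Candidate rule: correlation in a trailing window, cut at the first
# window where it turns negative
WINDOW = 30

def rolling_corr(a, b, w):
    """Correlation of a and b over each trailing window of length w."""
    return np.array([np.corrcoef(a[i - w:i], b[i - w:i])[0, 1]
                     for i in range(w, len(a) + 1)])

r = rolling_corr(proxy, temperature, WINDOW)
negative = np.flatnonzero(r < 0)
cut = WINDOW + negative[0] if len(negative) else None
print("first negative-correlation window ends at index:", cut)
```

Note that the detected cut point depends on the window length and the threshold, which is precisely the uncertainty-in-the-decision issue the questions above are asking about.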

Obviously with modern

Obviously with modern recording techniques all data should be archived. (2) makes most sense but as you point out, there is some additional decision process involved so that others could make different choices on the truncated version. The most robust truncation choice would be the one with most predictive power for future replications/re-analysis of the work. Which would be a matter of scientific skill on the part of the group that gathered and/or analyzed the data, I would think.

But archiving is a different thing from publishing. Should all the data be public? That's a touchy question - look for example at astronomy where the Kepler mission just released its first exo-planet data. But they didn't release all of it - they may have found "706 possible extrasolar planets from Earth size up to a bit bigger than Jupiter" but what they released was "data on the 306 targets they’re least excited about." The other data is supposed to be released later, after the group responsible for the mission has a chance to better analyze it.

In this hypothetical case, I would think the non-truncated data set should be available to people investigating the "divergence problem" itself. But should it be published until that problem is solved? Not clear to me. It's an ethics question, on which opinions will inevitably vary.

But what I am interested in is actual errors - facts, not matters of opinion. My original "not legitimate" issue - "presenting a graph that is specifically stated to be showing one thing, but actually showing another". I was impressed with your original claim that there was such a thing in AR4 (and TAR). Do you now believe there was any instance of that in any of the IPCC reports, that you're aware of? Can you provide the evidence you have on that?

[Note to other commenters - since this post is basically about Steve Mosher, I'm considering all topics he raises on topic at least in reply to his comments. But don't go overboard - thanks!]

To close out on the archiving question. If your field of practice is committed to open archiving; if you yourself have archived data and continue to archive data; if you believe that explaining divergence will require access to the divergent data; then you would hold that the divergent data should be archived. If you failed to archive the divergent data when you knew it was a contentious issue, when you had been officially requested to display this data in an IPCC report, how would you judge the credibility of such a person? Since, here in the comments, you have opined on what I should do to regain my credibility (a question of shoulds and not facts of "who said what", in violation of your own rules for comments), I think this question is on topic. I think you will find that rules for comments that exclude normative statements are internally inconsistent.

In a nutshell: you agree that failing to preserve the divergence data is bad practice. Yes or no?

If your statements on this are fully accurate and you have not omitted some material fact (such as: were all the present archiving standards in place when the data was first collected, or in its last known location?), then yes, you have described "bad practice". But I have no way of confirming at least half of the things you have stated here. If you have specific references on a specific case, then state them. Thanks.

[Added slightly later] As to my comment policy - my blog, and my opinions are of course allowed :) Since this post was about you, your opinions are fine too. But everybody else, please stick to the facts!

If I am to understand your rules for comments: your opinions are allowed, but others are not allowed to have opinions in comments? Or rather, you will decide through moderation whether what they write is an opinion or a fact? Is a question an opinion?
If you allow someone other than yourself to post an opinion, are we allowed to register a complaint? Will you allow such complaints through moderation or not?

Maybe - I'm just trying to avoid pointless fights. I seem to have gotten a completely new crop of commenters here the last couple of days and I'm still figuring out what to do with them. Consider current policy a first draft.

"such as - were all the present archiving standards in place when the data was first collected or in its last known location?"
I'm not sure that 'present standards' have to be in place at the time of archiving. It would be enough to violate the standards that were current at the time, as present standards could be less stringent than past standards (not likely, but logically possible). This also runs into the issue of having no practice whatsoever. On your view of things, if the standard was to NOT maintain records of your data, then it would be good practice to not maintain records.

To meet your standards of bad practice, what material facts have to obtain? Or rather, what would constitute good practice?

I can't say I understand the question. The basic issue is whether a person is meeting the expectations and habits of others in their own field, and that's a matter for judgment by those who are expert in the field. It would be very hard for outsiders to determine this, in my view. But if the policies and practices of the field are very clear and open, as you suggest, then perhaps it would be as easy for anybody to distinguish good from bad. But generally the issue is the details of the specific case, and I don't think any general rule is going to be either universal or appropriate, filtered or constrained by subject however you may wish.

Science is complex and ever-evolving. A single paper can spawn a whole new field; a new discovery can collapse a wide spectrum of previous speculation onto a new understanding. What is appropriate one year may be totally obsolete a year later, given results or techniques or understandings developed in the interim. What matters is predictive power, which should be ever-improving. Good practice is whatever points things in that direction of improvement; bad practices would be things that stymie improvement for one reason or another.

"The basic issue is whether a person is meeting the expectations and habits of others in their own field, and that's a matter for judgment by those who are expert in the field."

That clearly cannot be the standard. If it is the habit of everyone to NOT preserve the data that backs up studies, then the person who does save data is NOT meeting the habits and expectations of others. If it is the habit of everyone to never share code, then the person who does share code is not matching the habits of others.

"Good practice is whatever points things in that direction of improvement; bad practices would be things that stymie improvement for one reason or another."

As you can see from what Deep Climate suggests, the post-1960 data is not in the public archive. It is only available because it exists in the stolen mails. It seems patently obvious that preservation of the divergence data is a LOGICAL requirement for the advance of science. I cannot fathom an argument that holds that science is ADVANCED by deleting unexplained data. We cannot explain data that is not preserved. Archiving the data prior to 1960 and failing to archive the data after 1960 seems to be an obvious mistake. Nothing is improved by FAILING to archive it. Failing to archive it stymies anyone who wants to explain it in the future. The only copy (apparently) of the data that is available to others is the 'copy' that exists in the mails. Stolen, as it were.

Still, I'll remain open minded. What possible argument could you give that failing to archive the divergence data would be best practice? Or rather, that archiving the data would be a mistake?

Well, interpret my "meet" as "meet or exceed". If it's the habit of everyone to NOT preserve then the person who does is exceeding the standard of the field. We know which direction is "better". So that's good practice whether or not it's habit in the field. The question is whether not archiving is good practice.

Consider particle physics. I worked on one of those experiments way back when I was an undergraduate. The incoming data rate was so high it overwhelmed the computers we had at the time. There were layers and layers of filters to select out the data that was "interesting" to be recorded, and throw out the rest. And even that left us with tape after tape after tape of raw data for analysis; once the analysis was done, I don't think much effort was put into preserving the tapes themselves either. That required a great degree of understanding of the filtering processes and being honest about all that, many different scientists working together to review and critique the process. It seemed to work pretty well, but a *lot* of raw data was thrown out. That was the standard of the field.

Arthur, before you start thinking about where to cut off the divergent series, you might want to consider the temperature series after 1960. From your code:

0.15, 0.05, 0.02, -0.23, -0.35, -0.06, -0.01, -0.18, -0.2, -0.02, -0.14, -0.3

This is part of the reason why it makes sense to SHOW the decline in tree rings after 1960, part of the reason why it would be important to examine the certainty of the decision to cut things at 1960, and part of the reason why preserving the full tree ring record in the archives is important.
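To make the point about the cut decision concrete, here is a minimal sketch (my illustration, not code from any of the parties involved; the 1960-1971 year assignments are assumed purely for the example) of how the choice of cut year changes the tail of a truncated series:

```python
# Sensitivity of a truncated series to the chosen cut year.
# The values are the post-1960 series quoted above; the years
# 1960-1971 are an assumption made only for this illustration.
values = [0.15, 0.05, 0.02, -0.23, -0.35, -0.06,
          -0.01, -0.18, -0.20, -0.02, -0.14, -0.30]
years = list(range(1960, 1972))

def tail_mean(cut_year, window=5):
    """Mean of the last `window` values retained if we truncate at cut_year."""
    kept = [v for y, v in zip(years, values) if y <= cut_year]
    return sum(kept[-window:]) / len(kept[-window:])

# Different cut years give very different tails, which is the
# uncertainty-about-the-cut-point being argued above.
for cut in (1960, 1964, 1968, 1971):
    print(cut, round(tail_mean(cut), 3))
```

A reader can vary the cut year and see directly how much the retained tail of the curve depends on a decision that is nowhere quantified in the figure itself.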

Thanks for clarifying the post.

I would love to prove it to you by pointing you to an online copy. But sadly, I don't believe such facilities were available in the late 1970s. I typed the thesis myself on an old manual typewriter. And I doubt the Bodleian library has subsequently thought it important enough to transcribe and publish it online. If they have, they haven't bothered to tell me that they have done so. All the data and source code was put into the hardcopy document, photocopied from a 132-column music paper listing. Whether this was 'commendable' or not, it was the only way to demonstrate the exact methods I had actually used, and to show the examiners that my results were a consequence of what I had done, and not just created by my own imagination.

If you'd like to send me an e-mail, I can provide you with the thesis name and you can approach the Bodleian to verify that my thesis was handled exactly as I have described. They may make a charge, but they should be able to find it; they are obliged by British law to do so.

I'm sorry that you feel we have wandered off topic.

The disciplines of transparency, openness, external auditability and reproducibility are those that many of us who work outside academia are familiar with as day-to-day issues in our professional lives. It is the academic world's (and 'climate science's' in particular) arrant disregard for the importance of these processes that makes many like me deeply suspicious that these results cannot be relied on.

Simply put, I would not believe a trainee programmer who came to me with an important result but could not (or would not) describe the method he had used to generate it. And the fact that all the guys he worked with thought that he was a good chap by 'peer review' wouldn't sway me much either. Immaterial how many prizes and awards and degrees he may have: describe your method, or it's imagination. To suggest that some years later he tried to clarify the subject as a mark of good faith, but ballsed up the attempt, is a mark of his incompetence, not of his openness. The fact remains that we still don't know what the guy did, or says he did. I don't think he would survive very long in any semi-important job in the commercial world. I expect my admin clerks to be able to do a simple thing like that correctly without the need to supervise or check the work.

And so I will remain a sceptic. Thank you anyway for not suggesting that I am in the pay of Big Oil or other such nonsense. Ciao

AMac, that was my response. I know your position on Tiljander. I should have just said 'that other thing,' but wanted others to understand it.
I only bring it up to give you an idea of the level of certainty ClimateAudit has with regards to instrumental data in TAR Graph. Not as confident as they are of upside-down Mann, but still somewhat confident.

Your work with the Tiljander thread suggested to me that you could track down TAR data as well. I'm curious, did you ever analyze Mann's code in Mann 08 to see that it was used upside-down, as ClimateAudit claimed? I get the impression the answer is no.

Sorry for the mixup between Arthur and MikeN.

> did you ever analyze Mann's code in Mann 08 to see that it was used upside-down, as ClimateAudit claimed? I get the impression the answer is no.

MikeN, briefly, the answer is No -- I've only taken baby steps in running R, and everything except that and Excel is beyond me. However, I don't believe that a line-by-line trackback of "Mann's code" is necessary. (1) Read Mann08's Methods for the principles behind their screening; (2) look at Fig. S9 for the orientations; (3) examine the plots and the reasoning and the quotes from Boreas (2003) here (I only walked through XRD, but lightsum is the same). I have excerpted the CRUtem anomaly record for the Lake Korttajarvi gridcell and will post it sometime; that doesn't change the analysis.
This is as detailed as I can see from UC, and that analyzes the MBH papers.
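For readers following the "upside-down" question: proxy screening in this kind of work is done by correlating each candidate proxy against instrumental temperature, and a screen on the absolute correlation passes a series in either orientation, which is why the sign convention applied afterwards matters. A minimal sketch with hypothetical numbers (my illustration, not Mann08's actual screening code):

```python
# Correlation screening on |r| is orientation-blind: a strongly
# anticorrelated proxy passes the screen just as a correlated one does.
# All numbers below are hypothetical.
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

temp = [0.1, 0.2, 0.15, 0.3, 0.4, 0.35, 0.5]         # local instrumental series
proxy = [-1.0, -2.1, -1.4, -3.0, -4.2, -3.5, -5.1]   # anticorrelated proxy

r = pearson(temp, proxy)
passes_screen = abs(r) > 0.5   # screening on |r| ignores the sign
```

So a proxy whose physical interpretation runs one way can enter the screened set with the opposite orientation unless the sign is checked separately; that, roughly, is what the dispute is about.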

(2) Where the falsity made a material difference to the overall message of the graph, table or number. That is, the visual or mental impact of the difference is obvious on a cursory look at the presentation.

You still don't seem to understand the trick, beyond the technical details. The next part of Phil Jones's email was 'to hide the decline.' You emphasize that this is just 1.5% of the graph. As an aside, I am curious whether there is another blogger or scientist making that claim.

Looking at the leaked/stolen e-mails reveals what this trick is for.

"I know there is pressure to present a nice tidy story as regards 'apparent unprecedented warming in a thousand years or more in the proxy data', but in reality the situation is not quite so simple. We don't have a lot of proxies that come right up to date and those that do (at least a significant number of tree proxies) show some unexpected changes in response that do not match the recent warming. I do not think it wise that this issue be ignored in the chapter." The whole e-mail is fascinating.

"everyone in the room at IPCC was in agreement that this was a problem and a potential distraction/detraction from the reasonably consensus viewpoint we'd like to show w/ the Jones et al and Mann et al series."

"So, if we show Keith's series in this plot, we have to comment that 'something else' is responsible for the discrepancies in this case... Otherwise, the skeptics have a field day casting doubt on our ability to understand the factors that influence these estimates and, thus, can undermine faith in the paleoestimates. I don't think that doubt is scientifically justified, and I'd hate to be the one to have to give it fodder!"

"But the current diagram with the tree ring only data [i.e. the Briffa reconstruction] somewhat contradicts the multiproxy curve and dilutes the message rather significantly."

All of these are from e-mails sent on Sept 22, while the TAR was being put together.

Comment Viewing Options Note

The way comments are displayed on my system (Mac 10.6, Firefox) makes it hard to follow the conversations. I just noticed that at the bottom of the post (search for "Comment viewing options"), "Page length" can be set to "300 comments". This helps; other readers might want to take note. (I can't get the thread to move from "threaded" to "flat," which might help more.)

Did you try GreaseMonkey?

DeepClimate has now done a similar comparison regarding IPCC TAR figure 2.21. It turns out:

(A) Mann really did use instrumental series to pad his own curve for the end-point smooth (MBH) - but he has admitted this and repented since.
(B) Mann definitely did *not* use instrumental data to pad Jones' curve
(C) Mann did not "truncate" Briffa's curve; it was Briffa's group that sent the truncated data
(D) Mann may or may not have used instrumental data to pad Briffa's curve beyond 1960; according to DeepClimate's results, whether he did or whether he used the smoothing specified in the figure caption makes a difference of 0.01 K over just the last few years of the graph, essentially imperceptible in the figure.
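For readers unfamiliar with the padding issue in (A), (B) and (D): a centered smoother needs values beyond the last data point, so something has to be appended before the filter is run, and the choice of padding moves the smoothed endpoint. A toy sketch with made-up numbers (not DeepClimate's calculation and not the actual IPCC smoothing):

```python
# End-point padding for a centered moving average. The choice of pad
# (repeat the last value vs. append another series, e.g. instrumental
# data) changes the final smoothed value. All numbers are hypothetical.
def smooth(series, pad, half=2):
    """Centered (2*half + 1)-point moving average; windows that run past
    the end of `series` draw their missing values from `pad`."""
    padded = series + pad[:half]
    return [sum(padded[i - half:i + half + 1]) / (2 * half + 1)
            for i in range(half, len(series))]

proxy = [0.0, 0.1, 0.05, -0.1, -0.2]   # declining proxy tail
repeat_pad = [proxy[-1]] * 2           # repeat the last value
instr_pad = [0.3, 0.4]                 # hypothetical warmer instrumental values

end_repeat = smooth(proxy, repeat_pad)[-1]   # stays low
end_instr = smooth(proxy, instr_pad)[-1]     # pulled upward by the pad
```

With the repeat pad the smoothed endpoint stays with the declining proxy; with the warmer pad it is pulled upward. That is why establishing which padding was actually used in each of (A)-(D) matters to how the figure reads.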

There are some more mysteries about that graph though - why was the Briffa curve shifted up a bit? Was the smoothing exactly as claimed? DeepClimate promises a part 2 later...