One of the great puzzles I find myself up against these days, in several different contexts, is finding a clear way to express just how wrong certain expressed views are. This is not (at least usually not) a question of moral wrongness; in most cases it is simple logical inconsistency, disagreement with the basic scientific understanding of an issue, or abuse of the English language in ways that make no sense whatsoever. Last fall I spent an inordinate amount of time documenting the errors in an article by a climate-change "skeptic", but even then a simple count of the problems doesn't give a true picture of the enormity of the article's misrepresentation of the facts.
And we all make at least minor mistakes, so a simple error count could easily overstate the problems in one piece relative to another - the few errors in Al Gore's presentation, for instance, produced all sorts of glee on the "skeptic" side. Still, counting errors does give at least a minimal picture of "wrongness" - Tim Lambert had quite a go of it with the recent Ian Plimer book (which has perhaps been all the rage only in Australia, but is still interesting to follow). But there's definitely something missing in that measure.
On the low end, there weren't actually that many errors in George Will's piece that caused all the trouble earlier this year - but they were so egregiously misleading, and appeared in such a high-profile venue, that Will's wrongness score surely should come out far higher than that of almost any piece with an error count in the hundreds. How would one measure that in any sort of objective fashion, though?
Perhaps the ratio of errors to words is one useful measure. One is reminded of the sign in Dashiell Hammett's "The Golden Horseshoe":
ONLY GENUINE PRE-WAR AMERICAN AND BRITISH WHISKEYS SERVED HERE
I was trying to count how many lies could be found in those nine words, and had reached four, with promise of more...
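The errors-per-words ratio is easy enough to compute; here is a minimal sketch of it in Python, using the Hammett sign as input. The function name and the hard-coded count of four lies are just illustrative - actually counting the errors is, of course, the hard and subjective part.

```python
# Toy "error density" metric: errors flagged per word of text.
# The count of lies (four, so far) is from Hammett's narrator;
# producing such a count for a real article is the difficult step.

def error_density(error_count, text):
    """Return errors per word for a piece of text."""
    words = text.split()  # split on whitespace
    return error_count / len(words)

sign = "ONLY GENUINE PRE-WAR AMERICAN AND BRITISH WHISKEYS SERVED HERE"
print(round(error_density(4, sign), 2))  # 4 lies in 9 words -> 0.44
```

By this crude measure the saloon sign scores far worse than any newspaper column could, which already hints that raw density can't be the whole story either.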
But there's more to it than error counts, too: some errors are big, and some are in themselves small. What got me thinking more about the quantification problem recently was a pointer to a wonderful essay by Isaac Asimov - The Relativity of Wrong:
The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong.
but this simplification is itself, obviously, wrong in some sense, because the wrongness of our knowledge is definitely decreasing with time. Asimov expounds on this from the "flat-earth" position:
when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.
Asimov then quantifies this in terms of the Earth's curvature: the expected deviation of the actual surface from the flat-earth simplification, per mile. It's not a bad approach - now, how does one expand it into something more broadly applicable, against the sort of statements we've seen on climate? More thoughts on that later...
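Asimov's per-mile figure can be checked with a few lines of arithmetic. The sketch below assumes a spherical Earth of radius roughly 3959 miles (the oblate-spheroid refinement Asimov goes on to discuss changes the answer only slightly) and uses the standard small-distance approximation for how far a sphere's surface drops below a flat tangent plane.

```python
# How wrong is "the Earth is flat"? Measure the drop of the spherical
# surface below a flat tangent plane over a given distance.
# Assumes a spherical Earth; radius value is approximate.

EARTH_RADIUS_MILES = 3959.0
INCHES_PER_MILE = 63360  # 5280 feet * 12 inches

def drop_inches(distance_miles, radius_miles=EARTH_RADIUS_MILES):
    """Approximate drop below the tangent plane: d^2 / (2R), in inches."""
    return (distance_miles ** 2) / (2 * radius_miles) * INCHES_PER_MILE

print(round(drop_inches(1)))  # about 8 inches over one mile
```

That works out to roughly 8 inches per mile - Asimov's point being that the flat-earth model is wrong by only a tiny amount per mile, while flat and spherical models are separated by a vastly smaller gap than the one between either of them and "all models are equally wrong."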