Measuring Wrongness

One of the great puzzles I keep running up against these days, in several different contexts, is finding a clear way to express how wrong certain expressed views are. This is not (at least usually not) a matter of moral wrongness, but in most cases just simple inconsistency of logic, disagreement with basic scientific understanding, or perhaps abuse of the English language in ways that make no sense whatsoever. Last fall I spent an inordinate amount of time documenting the errors in an article by a climate-change "skeptic", but even then the simple count of problems doesn't feel like it gives a true picture of the enormity of the article's misrepresentation of the facts.

And we all make at least minor mistakes, so a simple count of errors could easily overstate the problems in one piece relative to another - the few errors in Al Gore's presentation, for instance, produced all sorts of glee on the "skeptic" side. Attempting to count errors does give at least a minimal picture of "wrongness" - Tim Lambert had quite a go of it with the recent Ian Plimer book (which has perhaps been all the rage only in Australia, but is still interesting to follow). But there's definitely something missing in that measure.

On the low end, there weren't actually many errors in the George Will piece that caused all the trouble earlier this year - but they were of such an egregiously misleading nature, in such a high-profile venue, that Will's wrongness score surely should measure far higher than almost any of those pieces with error counts in the hundreds. How would one measure it in any sort of objective fashion, though?

Perhaps the ratio of errors to words is one useful measure. One is reminded of the part in Dashiell Hammett's "The Golden Horseshoe":

ONLY GENUINE PRE-WAR AMERICAN AND BRITISH WHISKEYS SERVED HERE
I was trying to count how many lies could be found in those nine words, and had reached four, with promise of more...
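That back-of-the-envelope ratio is trivial to compute; here's a minimal sketch (the function is purely illustrative, and the numbers just restate Hammett's sign):

    def wrongness_density(error_count, word_count):
        """Errors per word: a crude density measure of wrongness."""
        return error_count / word_count

    # Hammett's sign: four lies found (so far) in nine words.
    print(wrongness_density(4, 9))   # ~0.44 errors per word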

But there's more to it than error count too. Some errors are big, and some are in themselves small. What got me thinking more about the quantification problem recently was a pointer to a wonderful essay by Isaac Asimov - The Relativity of Wrong:

The young specialist in English Lit, having quoted me, went on to lecture me severely on the fact that in every century people have thought they understood the universe at last, and in every century they were proved to be wrong. It follows that the one thing we can say about our modern "knowledge" is that it is wrong.

but this simplification is itself, obviously, wrong in some sense, because the wrongness of our knowledge is definitely decreasing with time. Asimov expounds on this from the "flat-earth" position:

when people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.

And he further quantifies it in terms of the Earth's curvature - the expected deviation of the actual surface from the flat-earth approximation, per mile. It's not a bad approach - now, how does one expand on that to something more broadly definable, applicable to the sort of statements we've seen on climate? More thoughts on that later...
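His figure is easy to check: over a horizontal distance d, a sphere of radius R falls away from its tangent plane by R - sqrt(R^2 - d^2), roughly d^2/(2R) for small d. A quick sketch of the arithmetic (standard geometry, not anything from Asimov's own text):

    import math

    R_MILES = 3959.0          # mean radius of the Earth, in miles
    INCHES_PER_MILE = 63360

    def drop_from_flat(distance_miles, radius_miles=R_MILES):
        """How far a sphere falls away from its tangent plane over a distance.

        Exact form: R - sqrt(R^2 - d^2); for d much smaller than R this
        is approximately d^2 / (2R).
        """
        return radius_miles - math.sqrt(radius_miles**2 - distance_miles**2)

    # Asimov's figure: the flat-earth model is off by about 8 inches per mile.
    print(drop_from_flat(1.0) * INCHES_PER_MILE)   # ~8.0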

Comments


With regards to measuring

With regard to measuring wrongness, fuzzy logic strikes me as a useful tool, but with regard to some errors being worse than others we can't escape other issues like relevance, communication, and significance. These latter terms are more subjective and depend on the audience and context.

Perhaps it is a better idea to narrow the problem and measure rightness or wrongness by how well someone answers a given question. There is still subjectivity, since the more knowledgeable the audience, the more in-depth the answer should be; however, at least relevance can be related to a question.

Now, we should explore fallacies. Generally, argument by authority is considered a fallacy; however, without authorities it is impossible to rate the credibility of any evidence. The more fundamental a piece of evidence is, the more useful I think argument by authority is; the more complex, general, and vague a statement or piece of evidence is, the less useful I think argument by authority is.

There was one philosopher who once said that every answer begets another question. One important question is whether an answer is constructive or circular. Perhaps how fundamental a piece of information is can be measured by the number of things it can be used to explain. If we can measure how fundamental a piece of information (conjecture, principle, statement, axiom, theory) is, then we can base the importance of argument by authority upon consistency with more fundamental principles (if they exist).
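One toy way to make that concrete: treat statements as nodes in a directed graph, with an edge from X to Y when X helps explain Y, and score fundamentality by explanatory reach. A minimal sketch (the graph contents here are purely illustrative):

    def fundamentality(explains, statement):
        """Count how many statements are transitively explained by `statement`."""
        seen = set()
        stack = [statement]
        while stack:
            node = stack.pop()
            for child in explains.get(node, []):
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return len(seen)

    # Purely illustrative graph: an edge X -> Y means "X helps explain Y".
    explains = {
        "conservation of energy": ["kinetic theory", "orbital mechanics"],
        "kinetic theory": ["gas laws", "heat flow"],
    }

    print(fundamentality(explains, "conservation of energy"))   # 4
    print(fundamentality(explains, "kinetic theory"))           # 2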

Another alternative is to simply look at the extent to which logical rules are violated.

So if most people agree on A and B, and
A and B imply C,

then we cannot appeal to authority solely on the truth of C without also addressing both the truth of A and the truth of B. Perhaps a definition of wrongness might be based on the confidence we assign to A, B, and C. For instance, if we assign high confidence that C is false but we believe A and B are true, then the more confidence we have that A and B are true, the greater the measure of wrongness should be, whether we are wrong about A, B, or C.
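One minimal way to put numbers on that (an illustrative scoring rule, not a settled definition): treat the confidences as probabilities and score the tension between believing the premises and doubting the conclusion.

    def wrongness(p_a, p_b, p_c):
        """Toy inconsistency score for the case where A and B imply C.

        p_a, p_b: confidence that A and B are true (treated here as
        independent probabilities); p_c: confidence that C is true.
        If A and B imply C, coherence requires p_c >= p_a * p_b, so
        the shortfall measures how wrong we must be somewhere.
        """
        return max(0.0, p_a * p_b - p_c)

    # Strong belief in the premises, strong disbelief in the conclusion:
    print(wrongness(0.95, 0.9, 0.1))   # ~0.755 -- badly wrong somewhere
    # Weaker belief in the premises makes the same disbelief less damning:
    print(wrongness(0.5, 0.5, 0.1))    # ~0.15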

This would give us a measure of wrongness for each argument, but not for a case as a whole, since one good argument can negate a lot of bad ones, provided the audience is willing to sort through all the mistakes. However, if we have a measure of the rightness or wrongness of each argument, we can still tally them all up and compare the number of good arguments to the number of bad ones. In doing so, one must keep in mind that transitive arguments (long chains of inference) are always weaker because of error propagation, and should contribute less to the total number of good arguments.
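Continuing the same toy model, the tally could discount each argument by its chain length before summing; the decay factor below is an arbitrary assumption standing in for error propagation.

    def tally(arguments, decay=0.8):
        """Net score for a set of arguments, discounting long chains.

        arguments: list of (score, chain_length) pairs -- score is +1
        for a good argument and -1 for a bad one; chain_length is the
        number of inferential steps.  Each extra step multiplies the
        weight by `decay`, a stand-in for error propagation.
        """
        return sum(score * decay ** (length - 1)
                   for score, length in arguments)

    # Three good arguments (one via a four-step chain) against two bad ones:
    print(tally([(+1, 1), (+1, 1), (+1, 4), (-1, 1), (-1, 2)]))   # ~0.71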

If I follow what you're

If I follow what you're arguing here, then yes, I believe the wrongness of any given statement depends on the wrongness of (or our confidence in) its supporting statements. One interesting thing about scientific statements is that they rarely depend on only a single chain of logic A + B -> C; rather, there are many separate lines of evidence for the most solid scientific concepts.

On the other hand, scientific theories are not derived by this kind of logic in the first place, but rather through an inductive process that is almost the reverse - you look for explanatory A's and B's for observational C's, and then our confidence in the theory (A or B) depends much more on its ability to predict new C's than on its explanation of the old ones. Which adds a time dimension as well.
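A sketch of that inductive picture in the same toy spirit (the prior and likelihoods are made-up numbers): each successful novel prediction raises confidence in the theory via Bayes' rule.

    def bayes_update(prior, p_obs_given_theory, p_obs_given_not):
        """Confidence in a theory after one of its predictions comes true."""
        evidence = (prior * p_obs_given_theory
                    + (1 - prior) * p_obs_given_not)
        return prior * p_obs_given_theory / evidence

    # Start sceptical (0.2); each confirmed novel prediction is far more
    # likely under the theory (0.9) than without it (0.3):
    confidence = 0.2
    for _ in range(3):
        confidence = bayes_update(confidence, 0.9, 0.3)
        print(round(confidence, 3))   # 0.429, 0.692, 0.871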

I recently read Popper's book on the logic of science and was planning to write up a bit of a review with my thoughts on the matter - it's definitely something I don't think we have a clear picture of right now, and it could use some new tools or analysis for the modern wired world...