Friday, March 29, 2013

Justice Flunks Math ... Or Not

Catching up on some online reading, I just chanced upon a New York Times op-ed piece titled "Justice Flunks Math". It deals with the Amanda Knox case. The authors' argument for their thesis (captured well by the title) centers on the following:
One of the major pieces of evidence was a knife collected from Mr. Sollecito’s apartment, which according to a forensic scientist contained a tiny trace of DNA from the victim. Even though the identification of the DNA sample with Ms. Kercher seemed clear, there was too little genetic material to obtain a fully reliable result — at least back in 2007.
By the time Ms. Knox’s appeal was decided in 2011, however, techniques had advanced sufficiently to make a retest of the knife possible, and the prosecution asked the judge to have one done. But he refused. His reasoning? If the scientific community recognizes that a test on so small a sample cannot establish identity beyond a reasonable doubt, he explained, then neither could a second test on an even smaller sample.
Whatever concerns the judge might have had regarding the reliability of DNA tests, he demonstrated a clear mathematical fallacy: assuming that repeating the test could tell us nothing about the reliability of the original results. In fact, doing a test twice and obtaining the same result would tell us something about the likely accuracy of the first result. Getting the same result after a third test would give yet more credence to the original finding.
Imagine, for example, that you toss a coin and it lands on heads 8 or 9 times out of 10. You might suspect that the coin is biased. Now, suppose you then toss it another 10 times and again get 8 or 9 heads. Wouldn’t that add a lot to your conviction that something’s wrong with the coin? It should.
My answer to the final (rhetorical?) question is yes: my conviction that the coin was biased would increase, because the second test is plausibly independent of the first. Whether the same reasoning would apply to a retest of the DNA evidence depends on whether the retest would be probabilistically independent of the original test or, if not, how strongly the two results would covary.
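
To put a rough number on the coin intuition, here is a small sketch of a Bayesian update. The 50/50 prior and the "biased" heads probability of 0.8 are my own illustrative assumptions, not anything taken from the op-ed.

```python
from math import comb

# Illustrative assumptions (not from the op-ed): a 50/50 prior that the
# coin is fair versus biased, with the "biased" coin landing heads with
# probability 0.8.
PRIOR_BIASED = 0.5
P_HEADS_FAIR = 0.5
P_HEADS_BIASED = 0.8

def binom_prob(n, k, p):
    """Probability of exactly k heads in n independent tosses with P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def posterior_biased(batches, prior=PRIOR_BIASED):
    """Posterior P(biased) after observing (tosses, heads) batches,
    assuming the batches are independent given the coin's true bias."""
    post = prior
    for n, k in batches:
        num = binom_prob(n, k, P_HEADS_BIASED) * post
        den = num + binom_prob(n, k, P_HEADS_FAIR) * (1 - post)
        post = num / den
    return post

print(posterior_biased([(10, 8)]))           # one batch of 8/10 heads: ~0.87
print(posterior_biased([(10, 8), (10, 8)]))  # two such batches: ~0.98
```

Under these assumptions, a second independent run of 8 heads in 10 pushes the posterior probability of bias from about 0.87 to about 0.98, which is the sense in which the authors' intuition is right.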

Suppose, hypothetically, that we have a test that is sometimes accurate, sometimes inaccurate, but infallibly produces the same result (right or wrong) on a given sample. In that case, no number of retests would tell us anything more about whether the original result was correct.

So the use of the coin flip analogy is somewhat facile. (I can understand the temptation to use it, though. The authors were writing for a general audience, not the more mathematically sophisticated -- not to mention orders of magnitude smaller -- audience for this blog.) Repetitions of the DNA test are likely to be neither independent nor identical, but somewhere in between. A retest might therefore add some information, but might well not shift our confidence in the original result enough to justify it. Bear in mind that retesting has both monetary and evidentiary costs (it consumes portions of a finite, irreplaceable sample).
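
To illustrate how much the answer hinges on that covariation, here is a toy sketch in which the second test either repeats the first result outright, behaves as an independent replicate, or falls somewhere in between. The prior, the error rates, and the "copying" form of dependence are all illustrative assumptions on my part, not a model of the actual DNA technology.

```python
# Illustrative toy model (all numbers and the "copying" form of dependence
# are my assumptions, not a description of the actual DNA testing).
PRIOR_MATCH = 0.5   # hypothetical prior that the trace is the victim's DNA
P_TRUE_POS = 0.95   # hypothetical P(test reports a match | true match)
P_FALSE_POS = 0.05  # hypothetical P(test reports a match | no match)

def posterior_after_two_positives(rho):
    """Posterior P(match) after two positive results, where with probability
    rho the second test simply repeats the first result (total dependence)
    and with probability 1 - rho it behaves as an independent replicate."""
    like_match = P_TRUE_POS * (rho + (1 - rho) * P_TRUE_POS)
    like_no_match = P_FALSE_POS * (rho + (1 - rho) * P_FALSE_POS)
    num = PRIOR_MATCH * like_match
    return num / (num + (1 - PRIOR_MATCH) * like_no_match)

for rho in (0.0, 0.5, 1.0):
    print(f"rho = {rho:.1f}: posterior = {posterior_after_two_positives(rho):.3f}")
# rho = 0.0 (independent replicate): ~0.997
# rho = 0.5 (in between):            ~0.972
# rho = 1.0 (pure repetition):       ~0.950, no better than a single test
```

At one extreme (pure repetition) the second positive result adds nothing; at the other (independence) it adds a great deal; in between, the gain in confidence may or may not be worth the monetary and evidentiary cost.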

I'm inclined to believe that the second DNA test should have been done, not because a repeated test would necessarily raise confidence substantially, but because technology had "advanced". Even then, it would have been warranted only if there were expert testimony that the technological improvements justified consuming more of the sample.
