Sunday 22 May 2011

Welcome to Maths on Trial!

This is a blog about mathematics. Not the mathematics taught in schools or universities, where you have to solve problems and take tests, but the maths in our lives: the maths used constantly in newspapers, finance, assessment, design, health, criminology and a million other places.

The articles in this blog will be devoted to the mathematics being used and misused, visibly or invisibly, in current affairs, particularly court cases. We hope to eventually invite guest writers to contribute articles. We'll also write about maths education in general; it could hardly be more relevant, given that it’s the failings in mathematics education that lead to the errors and misuses of maths in public affairs – and to the fact that they can take so long to be pointed out and recognized!

For this first post, we’d like to introduce ourselves and our forthcoming book, Maths on Trial.

Leila: My name is Leila Schneps. I am a professional research mathematician, and I live in Paris, France. I have four children, and apart from my work, which I'm crazy about, I love classical music, unusual tidbits from history, and crime fiction, especially British cozies. I also have an irresistible tendency to follow certain international court cases (such as the Amanda Knox trial in Italy, and now the DSK disaster in New York) in exhausting detail. I spend most of my spare time writing; I've published mysteries and puzzle-solving books under a pen name, not to mention my newest book co-authored with my daughter Coralie. My dream vacations are just about anywhere on the Mediterranean, with the Greek islands preferred.


Coralie: My name is Coralie Colmez. I studied mathematics at Caius College, Cambridge, and graduated with a first in June 2009. I then worked for a year as an assistant to Carol Vorderman and a researcher on the report she wrote with the Maths Taskforce about maths education in England. I live in London, where I tutor children of all ages in maths and spend the rest of my time giving my oven a workout, to the delight of my flatmates. I travel as much as I can, play the violin (occasionally) and read classic masterpieces, romance novels, and everything in between.


Our book: Maths on Trial covers ten criminal cases, in each of which a mathematical mistake played a significant role at a crucial point. The cases span more than a century, half a dozen countries, and several of the ways in which mathematics is used in criminology: DNA analysis, database trawling, proving discrimination, comparing handwriting, and calculating the probability of guilt.

Maths on Trial is not published yet, but it is in the capable hands of our wonderful agent, Andrew Lownie, and we hope for good news soon!

13 comments:

  1. Hi Leila,

    There has been some discussion in the mathematical literature about the use of partial DNA profiles to solve cold cases. As databases become larger, the possibility of coincidental matches rises. Will your book cover this topic?

  2. Hello Chris.
    Yes, we have a chapter devoted to that exact subject, covering the case of Diana Sylvester (which you may know). A partial DNA profile from a 30-year-old rape and murder found a single match in a database of over 330,000 registered sex offenders. The question arose at trial of what the probability was that the person in the database whose DNA matched the criminal's really was the criminal. It's not an easy question, and to our mind both sides made errors in interpreting the numbers. We explain these and work out our own answer.
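    To give a rough feel for why database size matters, here is a minimal Python sketch. The random-match probability used is purely an illustrative assumption, not the figure presented at trial:

    ```python
    # Illustrative sketch: coincidental matches expected from a database trawl.
    # The random-match probability is an assumed figure, NOT the one from the
    # Sylvester case.
    random_match_prob = 1 / 1_000_000   # assumed chance an unrelated person matches the partial profile
    database_size = 330_000             # roughly the offender database size mentioned above

    expected_coincidental = database_size * random_match_prob
    prob_at_least_one = 1 - (1 - random_match_prob) ** database_size

    print(f"expected coincidental matches: {expected_coincidental:.2f}")    # ~0.33
    print(f"P(at least one coincidental match): {prob_at_least_one:.2f}")   # ~0.28
    ```

    The only point of the sketch is that a trawl through a large database can turn up a match by coincidence alone with non-negligible probability, which is why the size of the database has to be taken into account when weighing a cold hit.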

  3. This comment has been removed by the author.

  4. The answer to the removed comment is this: we will cover a logical/mathematical error made by the appeal judge in the Meredith Kercher murder case. Our book makes no assumption of guilt or innocence in any of the cases; its purpose is simply to seek out and explain instances of false mathematical reasoning.

  5. Good luck with your book. I hope it is successful and I look forward to reading it.

  6. I agree with you that mathematics is often misused in criminal cases. I learned in law school that the California Supreme Court overturned a case based upon the "unlikelihood" of finding a black man and a blonde woman driving a convertible at a particular time and place.

    However, I did not see much (any) mathematics in your NYT article. Regarding the first test, you claim: "Even though the identification of the DNA sample with Ms. Kercher seemed clear, there was too little genetic material to obtain a fully reliable result — at least back in 2007." So were the results "clear" or "not fully reliable"? You can't have it both ways.

    This seems to be a clear case of garbage in, garbage out. No matter how many times you repeat an unreliable test, the results are unreliable. Your analogy to coin tosses seems trivial. In any event, according to Wolfram Alpha, the chance of getting 16 heads out of 20 tosses is about one in 214. Many juries would not find that convincing "beyond a reasonable doubt."

    I could go on – prosecutors are not known for discarding incriminating evidence lightly. The evaluation of DNA evidence given by experts is arcane and tedious. It is not really amenable to a couple hundred words in a New York Times op-ed piece.

    I look forward to reading your book.

  7. The case you refer to is analyzed in detail in our book.

    As to the DNA result being or not being "fully reliable": on sight, it looks fully reliable. However, the consensus within the scientific community is that until technology improves further (something which is already happening), analyses of very small quantities of DNA must be considered less reliable than analyses of larger quantities, even if the DNA profile from the small sample is fully readable. The judge in the Kercher case himself writes that the result is acceptable for purposes of orientation, but not "beyond a reasonable doubt".

    As for the chances of getting 16 heads in 20 tosses, it is indeed about 1 in 214, but that is not the point. The point is this: if you know you have a coin with a 50-50 chance of being fair or biased, and you throw 8 heads in 10 tosses, you can assess the probability of its being biased as some value X%. If you then again throw 8 heads in 10 tosses, that second test, taken on its own, again tells you the coin is biased with reliability X%. But if you combine the results of the two tests, together they tell you the coin is biased with a probability higher than X%. So for any test whose result is reliable with probability X%, repeating the test and getting the same result gives you a better assessment of reliability, no matter what the reliability was to start with. There is simply no good reason to reject a second test. (A small numerical sketch at the end of this comment makes this concrete.)

    You treat unreliability as though it were a yes-or-no matter, but it isn't. A result can be reliable with a probability of 60%, 80%, 98%, 100% or whatever, and this matters when assessing the importance of your evidence.

    As for your comment that prosecutors are not known for discarding incriminating evidence lightly: please note that the prosecutors in this case requested a second test, and the request was rejected by the judge, who can be assumed not to have had a clear understanding of all the scientific and probabilistic issues involved. In this he is certainly not alone amongst judges. I hope the NYT article demonstrates that a proper introduction to statistical and probabilistic fallacies should be an essential part of law school education.
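    To make the coin example concrete, here is a minimal Python sketch. One assumption has been added purely for illustration: the "biased" coin is taken to land heads 75% of the time, since the example above only specifies the number of heads thrown.

    ```python
    from math import comb

    # Sketch of the coin example: prior is 50-50 fair vs biased (as stated above);
    # the biased coin's heads probability of 0.75 is an illustrative assumption.
    P_HEADS_IF_BIASED = 0.75
    PRIOR_BIASED = 0.5

    def posterior_biased(heads, tosses):
        """P(coin is biased | observed number of heads), by Bayes' rule."""
        like_fair = comb(tosses, heads) * 0.5 ** tosses
        like_biased = (comb(tosses, heads) * P_HEADS_IF_BIASED ** heads
                       * (1 - P_HEADS_IF_BIASED) ** (tosses - heads))
        return (like_biased * PRIOR_BIASED
                / (like_biased * PRIOR_BIASED + like_fair * (1 - PRIOR_BIASED)))

    print(posterior_biased(8, 10))    # one test alone:       ~0.87
    print(posterior_biased(16, 20))   # both tests combined:  ~0.98
    ```

    Under these assumed numbers, a single test puts the probability of bias at about 87%, while the two tests combined push it to about 98%: repeating the test and getting the same result strengthens the conclusion rather than merely restating it.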

  8. One little addendum to the example Leila gave, related to the well-known concept of test-retest reliability: a test may be imperfect, but it may give the same result FOR YOU every time. For example, suppose you do not have some disease, but you do have a marker for it; and further suppose the test detects this marker flawlessly (this isn't as far-fetched as it might seem: the test might be visual inspection for large skin lesions as an indicator of melanoma).

    The point is that the test will be imperfect, yet repeating the *same* test will not give new information. This is one of the main uses of Bayesian reasoning: to ask "how do we UPDATE our probabilities in the presence of new information?" The key issue is how that "new" information is correlated with existing information. In the case of coin flipping, where each flip is treated as roughly independent, Leila's conclusion is exactly correct. But with some medical tests, the "new" information may simply replicate, perfectly reliably, the result of a prior test. This is why, for HIV, there used to be a sequence of ELISA and Western blot tests, whose ERRORS are roughly independent, which is close to the central issue.

    As I said, very minor... but a point that can easily be lost when "tests" are merely replicated and probabilities calculated assuming serial independence.
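    A minimal numerical sketch of the point above, with made-up test characteristics (the prevalence, sensitivity and false-positive rate are illustrative assumptions only):

    ```python
    # Sketch: repeating a test only helps if its errors are (at least partly) independent.
    prior = 0.01        # assumed prevalence of the disease
    sens = 0.99         # assumed P(positive | disease)
    false_pos = 0.05    # assumed P(positive | no disease), e.g. caused by a benign marker

    def posterior(p_evidence_if_disease, p_evidence_if_healthy):
        """P(disease | the observed positives), by Bayes' rule."""
        return (p_evidence_if_disease * prior
                / (p_evidence_if_disease * prior + p_evidence_if_healthy * (1 - prior)))

    one_positive = posterior(sens, false_pos)

    # Independent errors (two genuinely different assays): the evidence compounds.
    independent_retest = posterior(sens ** 2, false_pos ** 2)

    # Perfectly correlated errors (the same benign marker fools the same test
    # every single time): a second positive is certain given the first,
    # so the probability does not move at all.
    correlated_retest = one_positive

    print(one_positive, independent_retest, correlated_retest)   # ~0.17, ~0.80, ~0.17
    ```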

  9. Hi Leila, Coralie -

    Congratulations on the book. I look forward to reading it.

    I'm a bit sad that I will not be able to see the lecture at Conway Hall this weekend. Will there be any other London talks coming up?

    Also, having just found your blog, could you direct me to your favourite references/posts about mathematics education? This is a subject I often ponder and an area where I hope eventually to act!

    Many thanks,

    Ben


  10. Dear Ben,
    We're not experts in math education (either for the young or for the general public), but we did a podcast in a series called "Inspired by Math" with Sol Lederman, a series full of varied and original topics. As far as communicating knowledge to the general public is concerned, I really enjoy his kind of initiative.
    Best regards
    Leila
