This reminds me of the case of Lucia de B., a nurse once suspected of killing her patients.
There was never solid evidence against her. She became a suspect because nearly every time someone died in the hospital, she was on shift.
The initial chance of her being present at these supposed murders while being innocent was estimated at 1 in 7 billion. This number caused the police to focus their investigation entirely on Lucia de B.
Eventually she was convicted of the murders based not on hard evidence or witnesses, but on statistics. The chance of her innocence was put at 1 in 342 million.
The econometrician Aart de Vos was the first to notice that the initial Bayesian analysis was plainly wrong. For example, it had been presumed that the murderer had to be found among the nurses; other possibilities were neglected. They also hadn't corrected for the multiple comparisons involved in combining p-values. He revised the chance that Lucia was innocent to 1 in a million.
The court said it had abandoned the statistical "proof", but remained of the view that it couldn't be a coincidence. Incidents were classified as possible murders only when that classification counted against Lucia; other possible murder cases were left out of the equation. Correcting for this revised the chance further, to 1 in 50.
The statisticians Richard Gill and Piet Groeneboom revised the chance further still, to 1 in 9.
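The effect of the missing multiple-comparisons correction can be sketched with made-up numbers (illustrative only, not the actual case figures): the probability that one pre-specified nurse shows an extreme shift pattern is very different from the probability that some nurse, somewhere, does.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative setup: 9 incidents, and a nurse who works a quarter of
# all shifts was present at 8 of them.
p_one_nurse = binom_tail(9, 8, 0.25)   # p-value for ONE pre-specified nurse

# The right question: among, say, 1000 nurses nationwide, what is the
# chance that at least one shows a pattern this extreme by luck alone?
n_nurses = 1000
p_any_nurse = 1 - (1 - p_one_nurse) ** n_nurses

print(f"one specific nurse: {p_one_nurse:.1e}")       # about 1e-4
print(f"any of {n_nurses} nurses: {p_any_nurse:.2f}")  # roughly 0.10
```

The same shift pattern goes from one-in-ten-thousand to one-in-ten once you account for how many nurses could have been flagged.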
In my opinion statistics alone can never be adequate for conviction. It can muddy an investigation and lead to confirmation bias. And the gap between a Bayesian chance of 1 in 7 billion and 1 in 9 is so large that it casts even further doubt on the initial use of statistics.
"In my opinion statistics alone can never be adequate for conviction."
I'm not sure I understand what you suggest there, because, what else is there?
Let's say a murder conviction is based on 37 different security recordings clearly showing the defendant murdering the victim. The recordings make it a certainty that the defendant is guilty.
But what is "certainty"? It's an expression of probability, and that comes from statistics. Given the evidence, what are the odds that the defendant is not guilty? Extremely low. We have to come up with pretty crazy alternatives. The probability of the defendant being innocent in this case might be, let's say, 1 in 10^20.
We don't need to actually carry out this calculation, because we don't need the precise probability. It's clear that the probability is extremely low, so we can take a shortcut and refrain from figuring out exactly how low. But that shortcut is still ultimately an exercise in statistics.
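The shortcut can be made explicit with Bayes' rule. The numbers below are toy values, not figures from any real case:

```python
def posterior_guilt(prior, p_ev_given_guilty, p_ev_given_innocent):
    """Bayes' rule: P(guilty | evidence)."""
    num = prior * p_ev_given_guilty
    return num / (num + (1 - prior) * p_ev_given_innocent)

# Toy values: a 50% prior, evidence certain if guilty, and a
# one-in-a-million chance of seeing the same evidence if innocent.
p = posterior_guilt(prior=0.5, p_ev_given_guilty=1.0, p_ev_given_innocent=1e-6)
print(p)   # about 0.999999: "beyond reasonable doubt", but never 1.0
```

The jury never writes this formula down, but "the alternatives are too crazy to believe" is exactly an informal claim that `p_ev_given_innocent` is tiny.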
People aren't supposed to be convicted on circumstantial evidence alone.
IANAL but I think this is very wrong.
Obviously the court will prefer strong forensic evidence if it can get it. But when it can't, defendants can be, and are, convicted on circumstantial evidence, and this is not considered a miscarriage of justice.
If a security recording captures someone whose face can be clearly seen to be that of the defendant, and the recording shows them actually committing the crime, okay, that's pretty hard evidence. Anything short of that -- their face isn't shown clearly, perhaps, or the recording shows them being in the area but doesn't prove they actually committed the crime -- and you start having to talk about probabilities.
Even forensic evidence comes with probabilities attached. It's routine for fingerprint and DNA matches to be qualified with a probability of error. And often the true accuracy is overstated.
Assumptions. The devil is in the details. Throw too much math with too many assumptions at anyone but a panel of PhDs and you will get all kinds of misinterpretation, misunderstanding, and just plain wrongheadedness.
Certainly. My point is that, even if you don't want to discuss it, or use it, it all comes down to statistics and probability in the end. Get the most extreme proof you can imagine for somebody's guilt and that extreme certainty still comes down to probability in the end.
Noncircumstantial evidence is evidence which directly supports the point for which it is introduced, without any inference required.
Circumstantial evidence is evidence which requires a logical inference in order to support the point for which it is introduced.
That is roughly how evidence is explained by judges to juries in courtrooms.
For example, the victim's blood on the defendant's shoe is noncircumstantial evidence because it directly supports the point that the defendant was present in the vicinity of the victim's body. However, finding at the crime scene a brown fiber which is the same type of fiber used to stitch the defendant's very expensive, limited-edition gloves would be circumstantial evidence, because you have to logically infer that the fiber found comes from the defendant's gloves. Alternatively, finding gloves that are identical to the defendant's gloves is also circumstantial evidence unless there is other evidence showing conclusively that those particular gloves are the defendant's, i.e., footage of the defendant leaving the gloves there or something.
It all requires inference, though. The legal system just pretends that some inference comes "for free", and some doesn't. You never have "the victim's blood on the defendant's shoe", you have blood which you have reason to believe belongs to the victim on shoes you have reason to believe belong to the defendant. The only actual difference between your two examples is that of degree, i.e. the evidence linking the blood to the victim is much stronger than the evidence linking the fiber to the defendant.
I don't know. It seems like the defense can stipulate things that are nonetheless circumstantial evidence. Like, if the defendant acknowledges being in the area at the time, the prosecution no longer needs to deal much with evidence that establishes that, but that's still not "direct evidence" of the crime.
However, the main point of this thread of discussion, I think, was that even things that legally are considered "direct evidence" - say, testimony of a witness to the crime - do nonetheless involve some inference...
Your choice of examples actually supports the statistical nature of the distinction between circumstantial and noncircumstantial evidence. That the blood on the defendant's shoe came from the victim's body is based on a probabilistic test. That the blood on the defendant's shoe arrived through physical co-presence of the shoe and the blood's source body (whether or not it was the victim's) is also only "hard" evidence in the sense that other explanations are relatively improbable.
The difference between the probabilities involved in the above inferences, and the analogous inferences ineluctably underlying any hard evidence, and those involved in circumstantial evidence like your coat fiber, are purely quantitative. In some sense, hard evidence is that which can be accepted "beyond a reasonable doubt", while circumstantial evidence is that which does not pass such a test.
Unfortunately, "beyond a reasonable doubt" escapes precise definition. Yet, courts need to function, so we tacitly agree that sufficiently large quantitative differences are (legally) indistinguishable from qualitative differences.
(EDIT: I'm saying the same thing as mikeash, whose response appeared while I was writing my own)
You guys still aren't getting it; circumstantial evidence is not about the probability that the evidence is what it claims to be, it is about the necessary inference about what the evidence proves if the evidence is what it claims to be.
Noncircumstantial evidence directly supports the point for which it is introduced if the evidence is what it claims to be. The victim's blood on the defendant's shoe directly supports the point that the defendant was in the victim's presence when the victim was dying if it is established that the blood is the victim's blood.
Circumstantial evidence requires an additional inference beyond this step. The fibers are circumstantial because the fiber doesn't prove anything except that someone wearing those gloves might have been at the crime scene at some point in time. You require the additional inference that the fiber was left at the scene at the time of the crime, which the fiber by itself does not prove but which can be "circumstantially" proved in combination with other evidence. For example, if the defendant, or some other witness, claims that the defendant was never at the scene of the crime at any point in time (specifically, but not limited to, periods prior to the crime), the presence of the rare fibers from his rare gloves is circumstantial evidence that the only time he was at the scene of the crime was when the crime occurred. In other words, the circumstances surrounding the fiber are the proof, not the fiber itself.
Your second paragraph ignores the inference required to go from victim's blood on defendant's shoe to defendant's shoe was in physical proximity to victim's body. It also ignores the inference required to go from blood is measured as matching victim's to blood came from victim's body. It also ignores the inference required to go from blood was deposited onto defendant's shoe directly by victim's body to defendant was wearing shoe at time of blood deposit and thus in presence of victim's still-bleeding corpse.
The (admittedly pedantic) point being made is that proving 'x' never directly supports anything other than the truth of 'x'. Moving from 'x' is true to 'y' is true always requires further inference.
Having something on a security recording can be classed as "hard evidence".
The case in question was one of using circumstantial evidence alone to "statistically" decide whether she was guilty or not.
And yet, aren't all types of hard evidence really just probabilistic as well? For example, there is a non-zero probability that the murderer on the security camera is merely a person with an amazing physical similarity to the defendant. If we could develop an accurate mathematical model of human physical features, we could even estimate the probability that we're looking at a double and not the defendant.
Likewise, when a DNA match is used in court, the forensic scientist typically testifies to the probability of a false match. Once again, the evidence is probabilistic.
In both of these cases, the probability of a mistake is low. But it is non-zero, and it can be quantified--if not with current technology, then at least in principle.
So, you will never get away from deciding cases probabilistically. And if you simply refuse to assign numerical probabilities to anything even when the circumstances readily permit it, you're not increasing your chance of being right, you're just sticking your head in the sand.
Statistically means it's just an inductive argument for guilt, but so is being caught on camera. So I suspect that you just don't mean to use statistical at all.
The fact is there's no hard line between good and bad evidence, though I sincerely hope Bayesian equations are never the deciding factor in court cases. Math is great, but "probably guilty" just isn't good enough unless it's incredibly probable. As in, 99% probability seems to be a near certainty, but 75%? 75% is still a strong induction, but that's a 1/4 chance you're convicting an innocent person. So where do you draw the line?
Also remember that Bayesian probability is still extremely susceptible to bias, simply through how the variables are defined. If I were looking to convince a jury, it would be fairly easy to tweak the variables and exert a large degree of control over the resulting probability while appearing unbiased (because of Math!). So in this sense, I think that Bayesian probability is pseudomathematics. Nobody's questioning the math, but the conclusions are not purely derived from math.
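That sensitivity is easy to demonstrate: hold the strength of the evidence (the likelihood ratio) fixed and vary only the prior, a quantity the analyst gets to choose. All numbers here are invented:

```python
def posterior(prior, likelihood_ratio):
    """Convert a prior probability and a likelihood ratio into a posterior."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Assume the evidence is 1000x more likely if the defendant is guilty.
lr = 1000.0

# The same evidence under three defensible-sounding priors:
for prior in (0.5, 0.01, 1e-6):
    print(f"prior {prior:g} -> posterior {posterior(prior, lr):.4f}")
```

Identical evidence yields a posterior anywhere from near-certain guilt to near-certain innocence, depending entirely on a prior that never appears in the courtroom presentation.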
"Statistically means it's just an inductive argument for guilt, but so is being caught on camera. So I suspect that you just don't mean to use statistical at all."
I don't understand what you're saying. The fact that being caught on camera is an inductive argument for guilt is ultimately based on statistics or probability. What is the probability that the person on camera is the defendant?
Where do you draw the line? The American legal system, for criminal cases, draws it at "beyond a reasonable doubt", and provides no numerical guidance for what that means. But there is a line, and not wanting there to be one doesn't make it go away.
>Let's say a murder conviction is based on 37 different security recordings clearly showing the defendant murdering the victim. The recordings make it a certainty that the defendant is guilty.
This is the problem with this approach. Let's say the guilty party is someone who looks like the defendant. Now it doesn't matter how many security cameras there are, they all record what happened: Someone who looks like the defendant commits a murder on camera. The number of recordings doesn't inherently show whether it was the defendant or a double.
Or let's suppose that the evidence has been fabricated by someone trying to frame the defendant. Someone who fabricates one recording to the satisfaction of the jury can generally fabricate 37.
Or perhaps it was the defendant on the video but it wasn't a murder, it was a staged public death in cooperation with the alleged victim who subsequently went into hiding.
All of these things are individually relatively unlikely, but they don't become any more unlikely as a result of additional recordings.
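A toy calculation (with assumed, illustrative error rates) makes the point: only independent errors are driven down by more recordings, while common-cause explanations set a floor.

```python
# Assumed, illustrative error rates:
p_misread_one = 0.01   # one recording independently misidentifies someone
p_double      = 1e-4   # a lookalike actually committed the crime
p_framed      = 1e-4   # all of the footage was fabricated

n_recordings = 37

# Independent errors really do vanish as recordings accumulate:
p_all_misread = p_misread_one ** n_recordings   # astronomically small

# Common-cause alternatives are untouched by the number of recordings,
# so the total probability of a wrongful match has a floor:
p_wrong = p_all_misread + p_double + p_framed
print(p_all_misread)
print(p_wrong)   # stuck near 2e-4 no matter how large n_recordings gets
```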
This is one of the reasons why conviction rates are generally so high. Suppose (picking a number out of a hat, since we can't really know) that 60% of the accused who go to trial are actually guilty as a factual matter. But of the ones who aren't guilty, 0.6% are because a double committed the crime, 0.5% are cases where the defendant is being framed by a third party or the police, 0.7% are cases where the defendant could show a lack of guilt but doing so would endanger someone else and the defendant prefers to risk prison over endangering that person, etc., and when you add up all the individually low probabilities it turns out that at least one of them has happened in a quite substantial percentage of cases.
The trouble is that the defendant who goes to trial and claims to have been framed is not believed, because only 0.5% of cases are genuine instances of framed defendants (and juries intuitively understand that), and the prosecutor is too often allowed to infer from this that the remaining 99.5% of defendants are actually guilty, rather than that some quite large percentage of them are not guilty but for some other reasons, each of which seems about equally unlikely in any given case and is thus subject to the same misleading inference. But the alternative, to rigorously consider each of the individually not very likely alternatives, is extraordinarily expensive -- which is why statistics is given short shrift.
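The fallacy can be sketched numerically, using the illustrative percentages picked out of a hat above:

```python
# Hat-picked numbers: each alternative explanation for an innocent
# defendant looking guilty is individually rare.
alternatives = {
    "a double committed the crime":          0.006,
    "framed by a third party or the police": 0.005,
    "alibi would endanger someone else":     0.007,
}

# A jury rejecting each story on its own ("only 0.5% of defendants are
# really framed") never sees the sum, which is what actually matters:
p_some_alternative = sum(alternatives.values())
print(f"{p_some_alternative:.1%} of ALL cases involve some rare alternative")
```

Each line item is safely dismissible on its own; the sum, across the many such alternatives a real docket contains, is not.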
At the end of the day the problem is that proving guilt to a level of certainty that would satisfy us if we were being rigorous about it would also cost too much and be too difficult for prosecutors to prove in most cases to satisfy our political desire to incarcerate "bad people" on a mass scale without breaking the bank any more than we already do. Rigorous, large scale, feasible budget. Pick two.
I have an MS in statistics, teach graduate statistics to econ PhD students (essentially; it's an 'econometrics' class); do research on statistical methods, etc.
I'm really sympathetic to your point. I would not want the guilt or innocence of myself, a family member, a friend, etc depend on the numeracy and statistical literacy of a jury, with the stats explained to them through lawyers (even my own!)
As to the OP, the key sentence for me is:
> ...the fire had been started by a discarded cigarette, ... the other two explanations were even more implausible
Only TWO other possible explanations for the cause of a fire!?!?! Regardless of the word choice, I think the appeals court has a better intuitive understanding of some of the pitfalls of model-based Bayesian statistics than the OP (namely, if you accidentally put zero weight on parts of the prior, you will necessarily put zero weight on those parts of the posterior).
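The parenthetical pitfall can be shown in a few lines: a hypothesis given zero prior weight stays at zero no matter how strongly the evidence favors it. The hypotheses and numbers here are invented for illustration:

```python
def normalize(weights):
    """Rescale nonnegative weights so they sum to 1."""
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Invented priors over fire causes; "electrical fault" was (wrongly)
# never considered, i.e. it carries zero prior mass:
prior = {"cigarette": 0.6, "arson": 0.3, "lightning": 0.1, "electrical": 0.0}

# Invented likelihoods of the observed evidence under each hypothesis;
# the evidence actually fits an electrical fault best:
likelihood = {"cigarette": 0.2, "arson": 0.05, "lightning": 0.01, "electrical": 0.9}

posterior = normalize({h: prior[h] * likelihood[h] for h in prior})
print(posterior["electrical"])   # 0.0: no amount of evidence can resurrect it
```

With the best-fitting hypothesis excluded up front, the posterior mass simply flows to the least-bad of the remaining candidates, which then looks "proven".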
edit: if I'm going to be listing credentials, I should point out that my econ phd is in econometrics; I don't want to give the impression that my MS in stats is a sufficient credential to teach econ phd students.
I do not think I understand your argument. In the case as you describe it, the statistics were done incorrectly and caused a potential false conviction.
How is this different from an autopsy expert doing the job incorrectly, which leads to a potentially false conclusion?
In both cases an expert made a mistake or did their job poorly, leading to a potentially unfortunate outcome. Why should one be banned and not the other?
For a criminal trial, with a jury? People are hopeless at statistics.
Even on HN you'll find people making simple mistakes.
Someone confidently presenting a mathematical argument would be persuasive to a jury, even if their numbers are totally bogus.
> In both case an expert made a mistake or did their job poorly leading to a potentially unfortunate outcome. Why should one be banned and not the other?
People doing the job poorly should be banned. But the nurse case is interesting because poor statistics became self-fulfilling.
"We have some deaths, and this nurse is present when they happen, and so they are all suspicious", becomes "all the deaths that happen when this nurse is working are suspicious, and the deaths that don't happen when this nurse is working are not suspicious". And that sloppy thinking gets turned into "probability is 1 in a gajillion that these happen normally by chance."
>People doing the job poorly should be banned. But the nurse case is interesting because poor statistics became self-fulfilling.
>"We have some deaths, and this nurse is present when they happen, and so they are all suspicious", becomes "all the deaths that happen when this nurse is working are suspicious, and the deaths that don't happen when this nurse is working are not suspicious". And that sloppy thinking gets turned into "probability is 1 in a gajillion that these happen normally by chance."
Investigators have been known to be unduly influenced by pretty much any kind of evidence imaginable and then all further evidence is construed to support the original assumption/assertion. I need a more concrete difference or divider to promote other methods above statistics.
> Investigators have been known to be unduly influenced by pretty much any kind of evidence imaginable and then all further evidence is construed to support the original assumption/assertion.
A different way of phrasing this is that investigators have been known to incorrectly apply even standard tools that they've been explicitly trained in.
> I need a more concrete difference or divider to promote other methods above statistics.
If by "promote" you mean that we should try to increase the statistical literacy of lawyers, police, judges, and the general public, then hell yes. But encouraging armchair statisticians to try to determine causality and guilt from badly-collected observational data that's analyzed by financially-invested parties...? In the highest-stakes setting we have? Really?
The argument is mainly that it's (intuitively) much more common to make a significant error in statistics than autopsy.
Additionally I'd guess most times if you make a significant error during an autopsy, you are aware of it. Whereas it's much more difficult to know whether you made an error in your statistical analysis or not.
When I made that statement I was thinking about how blood spatter experts, bite mark experts, and bug forensics experts have all been discovered to be frauds, casting doubt on large swaths of forensic techniques that were generally accepted (by the court and public, not always by other experts in the field).
Those impressions come from remembered articles in which expertise in the above three fields was tested independently and the experts were found to lack predictive power. I would have to search for those articles again to provide a reference; I am working from memory.
I currently cannot distinguish trusting them about stats versus about blood spatter, bite mark, or DNA evidence.
Why should I trust them less for stats? Math does hold an honored position in society while also often being disliked (USA here). I will have to think about whether that is enough to single stats out from the others.
I believe the point being made is that it's very difficult to be assured of accuracy when applying numerical estimations to events. There is no discernible difference between the correct answer and an incorrect one, which makes the use of this technique incredibly dangerous when determining the guilt of a person in a court of law.
> There is no discernible difference between the correct answer and an incorrect one,
For non-experts, can an average jury member make better decisions about expert-level results in forensics than in statistics/math? It is possible that statistics and math are simply less common knowledge, and that most jury members can therefore discern finer differences in forensics than in statistics. If that is the case, though, there is nothing incorrect about the tool itself (statistics), which seems to be implied in the original article; it is just a lack of common understanding that, if corrected, would in theory allow a new tool to be used.
> In my opinion statistics alone can never be adequate for conviction.
This is in an English court, and it is a civil case. Thus, there is no "conviction". For a criminal case the standard is "beyond reasonable doubt". For a civil case the standard is "balance of probability".
So, no-one is going to jail or getting a criminal record. But they might have to pay compensation to someone else for fire damage.
The phrase "...beyond a reasonable doubt" comes to mind. I would hope a jury involved in such a case would be more reasonable about actual hard evidence, but I hear that juries are selected for emotionality and not rationality. I would hope that the judge in such a case would be more reasonable and demand some actual evidence.
Such is the case in a fearful populace, one that demands vengeance over justice. It appears as though something bad happened, so someone must pay and ... here's a magic formula with missing variables that says you might have done it so ... GUILTY.
> I hear that juries are selected for emotionality and not rationality
The case the parent was talking about took place in the Netherlands. I don't think they even use juries in the Netherlands (though I could be wrong about that).
Even in the UK -- the jurisdiction the article is about -- juries are purely random. We don't have the American practice of lawyers from both sides being able to reject jurors for their own reasons (as opposed to an actual good reason, like knowing the defendant, on which the judge makes the decision).
The Netherlands, it seems, is one of the few democracies that do not use juries for any trials, civil or not. The UK certainly has judge-only civil trials (commercial law is basically a way for two barristers and a judge to get paid to sit and talk for two months).
Most democratic countries use civil law, actually, though I don't think that's particularly relevant. There's a much stronger correlation between language and system of law than there is between form of government and system of law. Note the similarity between [0] and [1], and the complete lack thereof between [1] and [2].
What common law countries call a "civil case" is not the same as what civil law countries call "a case". It's confusing.
As for the distribution of languages and laws, it's largely due to history. Civil/code jurisdictions show the extent of the former Roman empire and later the influence of Napoleonic Code. Common law countries are almost universally within the Anglosphere and have a common legal heritage commencing in the 1066 Norman Conquest.
Well, it seems to me that many countries do have some sort of a jury system but they use it rarely or not at all. Also, there may be laymen involved (for instance, Sweden and Finland may have two lay judges plus one professional one) but those don't really count as juries.
It's not the court's decision that's jarring, it's the argumentation.
If the court argued from human unreliability in enumerating options, the judgement would be sensible. But simply arguing from a mistaken understanding of probability makes it look silly.
Bad statistics shouldn't be considered adequate for anything. I have no problem with solid statistics being used assuming they have been properly scrutinized by experts before being used to pass judgement.
So many convictions come based on simple circumstantial evidence without even considering the statistical basis for those circumstances. Many convictions are entirely emotional bias on the part of the jury. Why should allowing statistical analysis to be considered as evidence be considered worse than the status quo?
The quotations in this article are taken out of context and presented incomplete. For example, the article quotes
> The chances of something happening in the future may be expressed in terms of percentage. Epidemiological evidence may enable doctors to say that on average smokers increase their risk of lung cancer by X%. But you cannot properly say that there is a 25 per cent chance that something has happened: Hotson v East Berkshire Health Authority [1987] AC 750. Either it has or it has not.
And the judgement continues:
> In deciding a question of past fact the court will, of course, give the answer which it believes is more likely to be (more probably) the right answer than the wrong answer, but it arrives at its conclusion by considering on an overall assessment of the evidence (i.e. on a preponderance of the evidence) whether the case for believing that the suggested event happened is more compelling than the case for not reaching that belief (which is not necessarily the same as believing positively that it did not happen).
Which is exactly the bayesian approach of 'probability as state of knowledge'.
The quotes are contradictory. The Bayesian approach agrees with the second and disagrees with the first.
I suppose you could weasel in some sort of consistency about the wording ... something like "You cannot properly say there is a 25 percent chance that something has happened, but you can properly say you believe there is a 25 percent chance that something has happened." OK, fine, but what's the difference? You still assign a percent probability to the chance that something happened, and you still rule on the basis of those beliefs. Or do you? (It's not clear to me from those quotes.)
It's post facto versus a priori. There is a 25% chance of something happening, but after the fact, it either did or did not happen. Averaged over many cases we can say that after the fact there is a 25% chance it happened, but criminal trials look at one specific instance, and it's accurate to say that in a given instance something either did or did not happen.
You should still use probability to quantify that belief. Whether it happened in the past or is going to happen in the future doesn't matter. If you throw a coin, and the coin is flying in the air, what is the probability that it lands heads? By your reasoning, the motion is already in place, and it's either going to land heads or it isn't. With a high speed camera and some physics we might even be able to predict which side it will fall. Even if we go back to a little before you threw the coin, the electrons in your brain are already in place and it's already determined which way you'll throw the coin. Would it be invalid to say that there is a 50% chance that it will land heads? So even when we are talking about the future, we use probability to quantify our belief. Or perhaps more accurately, we use it to quantify our lack of knowledge: in this case our lack of knowledge about how exactly the coin will move. Since we are already using probability not just for "fundamental randomness" but for quantifying lack of knowledge about the future, what's so special about the future vs the past? It makes total sense to apply the same reasoning to the past, even if we are talking about a specific instance.
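A quick simulation illustrates the point: outcomes that are already determined can still be the subject of calibrated probabilistic belief by an observer who hasn't looked.

```python
import random

random.seed(0)  # fixed seed so the "past" is reproducible

# 10,000 coins flipped "in the past": every outcome is already determined.
already_landed = [random.random() < 0.5 for _ in range(10_000)]

# An observer who hasn't looked assigns P(heads) = 0.5 to each one.
# That belief is well calibrated even though each toss "either did or
# did not" land heads:
frequency = sum(already_landed) / len(already_landed)
print(frequency)   # close to 0.5
```

Nothing about the probability assignment changes if the flips happen after the observer states their belief instead of before; only the observer's knowledge matters.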
You're shifting the argument. Sure you should, in some cases,[1] use probabilities to determine your belief about whether something happened or not. That's precisely what the court says in the second paragraph. Your beef was ostensibly with the wording of the first paragraph, but as I pointed out that when talking about things that have happened in the past, the wording is not incorrect.
[1] Of course you also shouldn't sometimes use probabilities. If we have statistics saying that witnesses lie 15% of the time, is it helpful to present expert testimony to the effect of "there is a 15% probability the witness is lying?" Such evidence is not useful and also impinges on the fundamental right of the jury to judge witness credibility.
My only point is that whether something is in the past or in the future has nothing to do with whether you can use probability. If you're consistent then either you don't use probability at all to model lack of knowledge, or you use it for both past and future. For example if you find it valid to say "when he tosses that coin, it has 50% probability to land on heads and a 50% probability to land on tails", then you should also find it valid to say "he tossed that coin, and it now has a 50% probability to be on heads and 50% probability to be on tails" if you did not yet see the coin. That's because in both cases, the outcome was already determined (by the laws of physics, disregarding quantum mechanics). The thing is that you just don't know which way it will land or has landed because you lack perfect knowledge.
I don't have a problem with not using probability to model lack of knowledge, but you have to apply that principle consistently (and whether it is ethically desirable to use probability is an entirely different and even more difficult question than whether or not it leads to accurate beliefs). That said, I do prefer to use probability to model lack of knowledge, because what is the alternative? Ad hoc gut feeling based reasoning?
I actually read it as not contradictory. This was about two events: one which was a priori more probable than the other, based purely on the previous experiences of the judge. That is, he considered "X probably happened because Y has a history of happening less than X". I take that to be that the judge committed a classic fallacy: he took a frequentist, limit-in-the-long-run subjective judgement, and tried to apply it to a situation where it's really not applicable. He took his priors from a mistaken frequentist consideration, that is what I interpret the first quote to say.
What he should have done, the second quote says, is to "arrive[s] at its conclusion by considering on an overall assessment of the evidence". Instead of considering what happened before, he should have taken his priors from the evidence. So, in the end, it's a sentence that supports a Bayesian methodology, in this case. Or maybe the Court of Appeal just was confused at the time, who knows.
First para: you can say there is a 25% increase in the chance of dying if you smoke. You cannot say there is a 25% chance that the nurse murdered those people: she either did or did not. (What we lack is knowledge of the past event.)
Second para: on the preponderance of evidence, the court believes beyond a reasonable doubt that the nurse *did* kill those people. That event A *did* happen; we have used evidence to fill in the gaps of our knowledge. We are not assigning a probability to it, but we recognise that life is all probabilities; that's what reasonable doubt is all about.
I think that those two quotes (modified as I understand them) are consistent. They are also not entirely Bayesian but I would prefer to be in a court of law than a court of probability.
Hmmm, I guess I agree. But I think a Bayesian would argue that this is an argument over wording not meaning. If we formulate beliefs and act on beliefs, then how we want to word it doesn't make a difference.
Sherlock Holmes was a detective, not a judge. What was appropriate for his line of work isn't appropriate in the courtroom. Courts are not about what MIGHT happen (or have happened); they are about what DID (or did not) happen. Probability can be an excellent guide for further investigation into these things, but it should be left to the investigators. If they can't find anything harder than a set of odds, then judges have no place betting on them.
Still, the argument "either A has happened or it has not happened" is silly. What matters is the knowledge we base our decisions on (ultimately we don't even know the outside world really exists, we only have a set of measurements and infer the existence of an outside world from that), and sometimes that is only partially certain.
> If they can't find anything harder than a set of odds, then judges have no place betting on them.
But this is how our entire legal system is set up. For criminal proceedings, we have to find the defendant 'guilty beyond reasonable doubt'. What is 'reasonable doubt' if not a (theoretically) quantifiable or parameterizable representation of our belief that the defendant committed the criminal act?
We don't explicitly quantify this threshold as, e.g., a 95% chance, but that's still the exact same process we require a jury to undergo, even if we don't attach quantifiable numbers to the results.
The jury can never return a probability of 1 (in statistical terms, 'almost surely'), or else the entire appeal system wouldn't exist.
I believe the purpose of randomized juries could be said to be that, while there are parameters, no one person or entity knows all of them.
this seems like a complete over simplification and naive analysis of the situation - and the argument smells like a straw man and appeal to authority, comparing it to sherlock holmes - who incidentally /is obviously wrong/. the idea that 'whatever remains' is quantifiable and finite in the real world is a stark contrast with reality - and actually the fact that a probability is precisely 0 or 1 is very significant to its power in drawing conclusions. so not only does forbidding probability not forbid sherlock holmes style 'logic' but his logic is broken anyway in the vast majority of real world cases.
what the court says is common sense... its a shame it offends some academic view... oh, no, wait, it really isn't.
> this seems like a complete over simplification and naive analysis of the situation
The key phrase in the linked article:
> and so I must now tell them that the entire philosophy behind their course has been declared illegal in the Court of Appeal. I hope they don't mind.
Is what reveals the misunderstanding. Judges don't "declare" things illegal, they rule on matters of what the law is. That the law requires judgements to be rendered in certain terms is not a statement of the legality of making Bayesian arguments.
The law requires a decision. The conceptual basis of the common law is a promise that a court will always render a clear and specific set of rulings or orders, and that there will always be an explanation of those rulings or orders. Judges are not free to say to disputants that they are X% likely to win their claim.
This is one of those places where Bayesian probability isn't a good fit. Fuzzy logic -- so passé these days -- at least has an understood mechanism for "defuzzifying" in a fashion which lawyers would find quite familiar.
It also overlooks that judges have many other places to introduce flexibility and weighed judgement. For example, judges may assign blame in portions for some crimes or torts; they may reject, moderate or modify some claims in equity; they have leeway to combine multiple considerations and legislative constraints in handing down criminal sentences and so on.
One thing that bugs me tremendously about outsiders looking in at law is the assumption that, since lawyers don't immediately and entirely embrace new idea X, they are fusty old fools who are an impediment to the good. It's an argument born of the ignorant belief that lawyers are deliberately obtuse, or judges out-of-touch theoreticians. Lawyers and judges touch on more problem domains in more depth, with greater consequences, than pretty much any academic or software developer.
> Edwards-Stuart had essentially concluded that the fire had been started by a discarded cigarette, even though this seemed an unlikely event in itself, because the other two explanations were even more implausible.
Anyone who believes that there are only three possible causes of a fire (not just three likely causes, but three causes with nonzero probability) does not understand fires. Changing "zero" to "epsilon" is going to radically change the Bayesian estimator and vastly decrease the probability that it was a discarded cigarette. Since no one in the case seems to have defined "epsilon", I'd throw out a naive Bayesian analysis too.
The author should (and, after looking at his CV, I assume does) understand these modeling issues, so... I'm disappointed. The statistical analysis seems as bad as the legal analysis.
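The sensitivity to "zero vs. epsilon" can be sketched numerically. The weights below are purely hypothetical, not taken from the case; the point is only how much the posterior on "cigarette" moves once the hypothesis space is reopened.

```python
# Hypothetical prior-weighted plausibilities of each candidate cause.
# The court's reasoning effectively set P(any other cause) = 0.
weights_closed = {"cigarette": 0.02, "arcing": 0.005, "arson": 0.001}

def posterior(weights):
    """Normalise weights into a probability distribution over causes."""
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

closed = posterior(weights_closed)

# Re-open the hypothesis space: give "everything else" even modest weight.
weights_open = dict(weights_closed, other=0.05)
open_ = posterior(weights_open)

print(f"closed world: P(cigarette) = {closed['cigarette']:.2f}")  # ~0.77
print(f"open world:   P(cigarette) = {open_['cigarette']:.2f}")   # ~0.26
```

Changing the catch-all's weight from 0 to a small epsilon drags the "least unlikely" cause from near-certain to a clear minority, which is the fragility described above.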
>> and so I must now tell them that the entire philosophy behind their course has been declared illegal in the Court of Appeal. I hope they don't mind.
>Is what reveals the misunderstanding. Judges don't "declare" things illegal, they rule on matters of what the law is. That the law requires judgements to be rendered in certain terms is not a statement of the legality of making Bayesian arguments.
The writer seems to be assuming that the court case sets a precedent that makes it hard, if not impossible, to use Bayesian reasoning in the courtroom, and sums that up as "illegal".
> The law requires a decision. The conceptual basis of the common law is a promise that a court will always render a clear and specific set of rulings or orders, and that there will always be an explanation of those rulings or orders.
I thought this was a civil case and did not require proof "beyond reasonable doubt", but rather just high probability; a term I have heard is that it requires a "preponderance of evidence". Bayesian statistics/reasoning is supposed to help you reach a conclusion with a high likelihood of being correct, which seems like exactly the final state I want a judge to be in when ruling. What is bad or counterproductive about reaching that state by Bayesian methods?
The problem is that the law requires a statement of cause in order to apportion responsibility. It cannot abide uncertainty at the moment of judgement. It is accepted that, because the consequences of misjudgement are less severe in civil cases, it is acceptable to relax the standard of judgement to the "balance of probabilities".
Note the case that was mentioned inline that this case was affirming: Hotson v East Berkshire Area Health Authority[1]. First of all, the linked poster should be railing at that decision, this one merely takes it as precedent.
In Hotson there was a question about causality based on the given probability of a child's recovery. A child who fell out of a tree was estimated to be 25% likely to recover, and medical staff seem to have taken the view that the child was basically a hopeless case. Perhaps if they hadn't, went the reasoning, the child might have done better because the staff would have tried harder. If that were so, some responsibility could be apportioned to the Health Authority, and not just to the child falling out of a tree.
The Lords ruled that the only thing that certainly happened was the child falling out of the tree. Counterfactuals based on probabilities couldn't be admitted because there's no way to reliably nail the damn things down. Anybody can come along with a different Bayesian network and give you a different estimate. What mattered, in the view of the Lords, was what could be verifiably said to have actually happened.
A more-than-standard disclaimer:
I am not a lawyer. I was terrible at torts. Torts law is notoriously tricky and can vary very widely from country to country -- I studied in Australia and these cases are in England. This post does not constitute legal advice. Hell, it doesn't even make for good reading.
> What mattered, in the view of the Lords, was what could be verifiably said to have actually happened.
It would be interesting to see how "verifiably" and "actually happened" would be refined to produce a judgement. I could imagine at least refinement towards frequentist statistics, Bayesian methods, or "I know it when I see it" (gut feeling, or some other non-quantitative standard of obviousness).
I would say that main things to remember about law in common law jurisdictions are:
1. The judge does not seek evidence, does not provide evidence, and does not provide arguments. The presentation of fact and law is the job of the plaintiff/respondent or prosecutor/defendant (depending on civil or criminal law).
2. The judge rules on what is introduced during court cases. Judges almost always follow judgements that have been made previously -- precedent, also called stare decisis or "let the decision stand". In this case the precedent identified was Hotson. A lower court judge who presumes to modify a higher court's principles can expect that the higher court will accept an appeal on matters of law.
3. The legal system's purpose is to render certain, verifiable judgements. Judgements can't be uncertain ("You are sentenced to a 95% chance of prison" or "between 20,000 and 50,000 is awarded to the plaintiff for damages" is not helpful), so as a step in the process of legal reasoning, information about uncertainty is deliberately destroyed. This is called "balance of evidence" or "preponderance of evidence" in civil trials, or "beyond reasonable doubt" in criminal trials. It is deliberately imprecise, because all attempts to introduce mechanical rules for evidentiary judgement have so far proved to be even more gameable than the fuzzy statements.
The legal system is profoundly introspective. Lawyers will introduce matters of legal interpretation in argument, and these must be addressed. Appeals to higher courts are usually only possible on questions of law, meaning that very fine details of legal reasoning are constantly being teased out.
Judges must constantly weigh what the law is, and they also at higher levels frequently decide how the law is determined to be what it is. Given that subjects like jurisprudence and interpretation of laws can have massive practical consequences, it's an area that receives great scrutiny.
I became interested in the methods of analysis and judgement because of Australian constitutional law, where there are -- depending how you count -- between 4 and 6 basic schools of thought on how to interpret the Australian Constitution.
> and the argument smells like a straw man and appeal to authority, comparing it to sherlock holmes
A humorous reference to literature is hardly an appeal to authority.
> who incidentally /is obviously wrong/
He's not /obviously wrong/, he's subtly wrong in not explicitly acknowledging that the enumeration of options, not evaluation of their probability, is usually the difficult part. He's obviously right if you assume all possibilities are enumerated.
> actually the fact that a probability is precisely 0 or 1 is very significant to its power in drawing conclusions
0 and 1 are not probabilities, and while certainty has power in drawing conclusions, it only really occurs in purely theoretical discussions.
> what the court says is common sense...
Whether it's common sense or not does not make it valid.
> oh, no, wait, it really isn't
Is that about it being common sense, or it offending academic views?
"I teach the Bayesian approach to post-graduate students attending my 'Applied Bayesian Statistics' course at Cambridge, and so I must now tell them that the entire philosophy behind their course has been declared illegal in the Court of Appeal. I hope they don't mind."
Classic. I was told (at the same uni) that, in a case involving DNA evidence, I could ask to be dismissed from a jury, citing training in Bayesian statistics. (Such reasoning was prohibited because reversing the scientific certainty to look at false positives, in cases with weak circumstantial evidence, essentially kills DNA evidence's usefulness. If someone was merely in the right city, you expect hundreds of DNA matches out of millions of people; but if you can track them to the right street at the right time, you would expect 0.001 matches, and thus the evidence points to them much more strongly.)
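The arithmetic behind that is just expected matches = pool size × random-match probability. The population figures and match probability below are illustrative assumptions, not figures from any real case:

```python
# Illustrative assumption: a DNA profile with a 1-in-a-million
# random-match probability among unrelated people.
match_prob = 1e-6

# Weak circumstantial evidence: the suspect pool is "everyone in the city".
city_population = 5_000_000
expected_city_matches = city_population * match_prob      # ~5 innocent matches

# Strong circumstantial evidence: narrowed to one street at the right time.
street_population = 1_000
expected_street_matches = street_population * match_prob  # ~0.001 matches

# With the weak prior, a match is nearly meaningless; with the strong
# prior, a match is very telling.
print(expected_city_matches, expected_street_matches)
```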
Everything you hear about law that was not conveyed to you by a lawyer, a law lecturer or a judge is probably horseshit.
People love to think they know some obscure wrinkle, some cool loophole, some nifty curio about the law. But so many of the stories you hear are just stories.
ps. a better way to get disqualified is to have studied law.
The quote is stupid. Nobody declared anything illegal. They put boundaries on what evidence can be presented to juries, which is a key function of judges.
In the American rules of evidence, evidence is only admissible if its probative value outweighs its prejudicial effect. That's why, for example, judges ban information, generally, of past crimes. While criminals tend to be more likely to commit another crime, the jury gives such evidence weight beyond its actual value. The same can be true of expert statistical testimony.
Example DNA useful:
Cops find criminal's blood at crime scene. Cops track down murder suspect using security video. Find suspect. Take DNA sample. Sample matches DNA at crime scene. DNA proves guilt beyond reasonable doubt 1:1000000.
Example DNA not useful:
Cops find suspect's blood at crime scene. Cops peek in DNA database and find 3 people in the region with matching DNA. One of them sort of looks like the shadow in the security video. They arrest him, get him picked out of a lineup and say DNA matches blood at the crime scene in the trial. Cops say DNA evidence proves guilt with odds 1:1000000, but odds of innocence are the complete opposite and closer to, say, 1000:1.
gizmo says what I meant, indeed. In the UK, the police have been keeping a DNA database of everyone ever arrested for anything; they run samples through it from crime scenes. The ECHR has (I think) ruled that this is illegal, because of this sort of error, but the Home Office refuses to listen to European Courts, or indeed mathematics, when it comes to crime.
There it was held that an expert witness shouldn't use Bayesian reasoning to calculate probabilities to tell a jury "outside the field of DNA (and possibly other areas where there is a firm statistical base)".
I think that, by a 'firm statistical base', the court's getting at the sort of situation where the right prior is widely agreed on (such as DNA, where the size of the DNA database is known), so there won't be much opportunity for different expert witnesses to disagree on probabilities due to having different priors. (N.B. IANAL)
This is still nonsense, IMHO. I can understand the court not wanting juries to be overly swayed by a spuriously precise probability that might have been very different if a different expert had been chosen. But there's no justification for restricting the statistical methods that the expert uses to reach their conclusion, however it's expressed to the jury.
Interestingly, that case shared one of the appeal judges from the case in TFA (Lord Justice Beatson, then Mr Justice Beatson).
I think the distinction being made here is the difference from using statistical methods for determining overall guilt versus using statistical methods for determining specific facts about the case.
We use statistics to say that someone was or was not at the scene of a crime (DNA testing doesn't 100% guarantee that a person's blood is actually a match), for example. We don't make the leap that since they were there, "odds are" they did it, however.
The question in R v T that Bayes theorem was being applied to was a specific, factual one: whether a footprint could have been made by a particular shoe.
> *In upholding the first instance decision, the Court of Appeal reiterated the principle in cases where there are competing explanations for a particular loss that causation cannot be established only by a process of elimination such that the 'least unlikely' cause of a loss is identified. A claimant must demonstrate that the particular version of events that they rely upon is more likely to have happened than not, in order for the civil burden of proof to be satisfied.*
I'm not sure what the problem is.
"It could have been A, B, or C. It's really unlikely to have been A or B, and thus it must be C" is obviously flawed, because for all we know it could have been D, and even if it was C you need to show (on the balance of probabilities) that C is the cause. Not just that C is more likely than A or B.
It's not the judge's role to independently nominate D. It's up to the lawyers for the plaintiff and the respondent to present facts and make legal arguments.
The judge's role is to weigh those legal arguments, and in (most) civil trials to weigh the facts presented on the balance of probability, and to render a decision.
If you require judges to run a Bayesian network over the entire universe for each case, the legal system will get a wee bit slower.
> It's not the judge's role to independently nominate D.
But that is exactly why the judge cannot assume all possibilities were enumerated, and requires positive evidence for C rather than merely negative evidence against A and B.
Judges, oddly enough, decide cases according to the rules of legal reasoning. They cannot and won't introduce their own evidence. A judge that did so would be rightly removed from the bench.
So you agree then? I did not say a judge should propose any evidence.
Perhaps it's easiest to explain with an example.
Let's say the defence argues for A, and the prosecution argues for B. The prosecution shows that the probability of A being true is 1%. The judge cannot take that as B having a probability of 99%, since it may be that C is true.
Thus the judge requires prosecution to show that B is 50%+ likely, not that A is unlikely.
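With made-up numbers, the gap looks like this (the probabilities are purely illustrative):

```python
# Hypothetical probabilities over an exhaustive set of explanations.
p = {"A": 0.01, "B": 0.40, "C": 0.59}
assert abs(sum(p.values()) - 1.0) < 1e-9  # they exhaust the possibilities

# Showing P(A) = 1% does NOT show P(B) = 99%:
p_not_a = 1 - p["A"]  # 0.99 is the probability of "not A", i.e. B *or* C
p_b = p["B"]          # B itself is only 0.40 -- below the civil 50% bar

print(p_not_a, p_b)
```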
The ruling is not concerned with statistics as a mathematical discipline. The "balance of probabilities" is a legal term.
Please read the court ruling, and then read the blogger's conclusion (summed up sarcastically in his last sentence) and tell me if you think he's right :-)
So is "reasonable doubt", but I have yet to see a perfect definition of it. Indeed, in the UK a jury questioned its meaning, the judge got snottily arrogant over it and ditched the case, but still no proper definition emerged, either from the critical judge or from anywhere else in the legal system. Interesting to me that the judge was so unable to answer the question that he ditched a whole trial.
I am amused by legal arrogance that thinks everyone else should fully understand all its weirdness and definitions. Especially having done some work in a solicitor's office, where I was the arrogant one getting irritated with most of the legal people not having even a basic understanding of the computers on their desks...
> in the UK a jury questioned its meaning, the judge got snottily arrogant over it and ditched the case ... Interesting to me that the judge was so unable to answer the question, that he ditched a whole trial.
If you're talking about Huhne & Pryce, that's just wrong. The jury was discharged because after 2 days of deliberations they told the judge that they couldn't come to a verdict (which 10 of them agreed on). http://en.wikipedia.org/wiki/Hung_jury .
I don't know where you got the idea that the judge got defensive because he didn't know how to answer what reasonable doubt meant, but it's utter nonsense. It's a very common question for juries to ask. The appeal courts have debated several times what the best guidance is. The judge was following the current guidance, which is tell the jury to treat it as ordinary english words and refuse to give any more specific definition. If each judge gave their own favourite interpretation of what 'beyond reasonable doubt' means (e.g. p(false positive) < 5%, or whatever), then the standard of proof you'd be tried under would depend on the judge, which would be obviously unacceptable.
The problem probably lies in the fact that you cannot prove something by reductio ad absurdum based on a probability distribution.
Because the law requires positive proof (or it should), not just "this chain of events is so unlikely under other assumptions that our assumptions must be right". This is where Sherlock's quotation is misleading: he says to rule out the IMPOSSIBLE, not the improbable.
The judge is right epistemologically. Bayesian statistics has only a terminological issue.
The whole basis of law is the elimination of "epistemic uncertainty" given a threshold of evidence for an event. If this is not possible, the case is not proved, and therefore the case is dismissed.
Sherlock Holmes's statement is a statement of method, not a statement of proof.
In other words, the law is far greater than Bayesian probability or any method, including scientific, since the burden of proof is from the evidence with the primary legal threshold of "innocent until proven guilty".
How many times on this comment page do people need to be reminded that the burden of proof in civil cases is only 51%, not "beyond reasonable doubt" as in criminal cases?
Poker players understand that a lack of knowledge of past events is exactly the same as uncertainty about future events.
For example, if the first round of hold em has dealt two cards to each of 9 players, there are 18 cards out and you only know the state of two of them. The odds that the flop will contain specific cards that help your hand are calculated against the total number of unknown cards, regardless if the unknowns are held by other players or still in the deck.
For example, you get a pair of 3s. There are two more threes in the deck, and it is pretty likely that, if the flop contains one of them, you will have the best hand at the table at that point. For the first card turned up at the flop, there is a 2/50 chance it will be a three; given no three yet, the next is 2/49, then 2/48. It doesn't matter how many face-down cards have been dealt to your opponents.
All the unknowns are still in the pool of possibilities, and it is no different when you assess the evidence you have of any other type of event that has already happened. Each bit of evidence means what it means, and the unknowns contribute to the pool of uncertainty.
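The per-card figures combine into an overall flop probability like this (a quick check of the numbers above):

```python
from math import comb

# You hold a pair of 3s; 50 cards are unseen, 2 of which are the other 3s.
# P(at least one 3 among the three flop cards) = 1 - P(no 3 in the flop).
p_no_three = comb(48, 3) / comb(50, 3)
p_set_on_flop = 1 - p_no_three

print(round(p_set_on_flop, 4))  # 0.1176, i.e. roughly 1 time in 8.5

# The calculation is identical whether the other 16 unseen cards sit in
# the deck or in opponents' hands: unknown is unknown.
```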
I'm not sure what the OP expects the court of appeal to do. They are tasked with determining whether there is cause for appeal; in this specific case, whether the original judge erred in law. They determined that he did not. They aren't saying whether the judgement was right or wrong, rather that it was arrived at in a lawful manner.
For anyone curious about how this would play out in the US, federal courts and most state courts use the Daubert standard. TLDR: experts (including statisticians) can testify if they're using fairly standard methods and there are no significant gaps in the evidence-->analysis-->testimony chain.
Accidents, murders, fires, etc. are all unlikely events by nature. Judging whether one of them occurred based on probability alone does not prove anything. The author's post is wrong in its assumption that "an unlikely event" = "impossible". Probability does NOT establish certainty. There is always a level of confidence, which is never equal to 100%, and therefore it seems logical that a court does not take the probability of an event as tangible proof of what did or did not happen. Probability != science.
90% of the work of a jury trial is managing human reactions to things. There are things that judges need to know about statistics. E.g. That in modern DNA forensics, lab error totally dominates the probability of a coincidental match, limiting accuracy to 1-2%. On the other hand, they must be deeply cognizant of human responses. A mathematician, told that some test shows a person 1,000 times more likely than average to be the killer, will intuitively realize that the person is still unlikely to be the killer. A jury of ordinary people won't. We have judges precisely to mediate between these two domains.
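The intuition credited to the mathematician is just Bayes' theorem with a small base rate. The suspect-pool size below is an illustrative assumption:

```python
# Illustrative assumption: 1,000,000 people could in principle have
# committed the crime, so the prior for a random person is 1e-6.
prior = 1 / 1_000_000

# The test makes this person 1,000 times more likely than average.
likelihood_ratio = 1_000

# Bayes' rule in odds form, then converted back to a probability.
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"{posterior:.4%}")  # ~0.1% -- still very unlikely to be the killer
```

A thousandfold update on a one-in-a-million prior still leaves the probability of guilt around a tenth of a percent, which is exactly the step juries of ordinary people tend to miss.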
Well, "everything humans do" includes creating the law, so by that definition, the law is highly incoherent.
And a close reading of the law reveals this. In fact, it is often through close readings of the law (and precedent and every other historical quirk) that controversial and contested decisions are rendered.
Mathematicians are people who define probability as the measure of some well-defined subset of some well-defined set of all events.
The people who use probability theory to build approximate models of real world events are called statisticians (and those who forget about "approximate" - applied statisticians).
In common law countries there are three main tributaries of law: legislation, the common law, and equity. Depending on where you are, the latter two are sometimes "fused" into a single type of law.
"Law makers" -- I presume you mean legislators -- only control legislation. This is often called "positive law", in that somebody has to take positive action to "make" it where it didn't exist before.
By convention, the common law is considered to be "discovered". It exists independent of men and women, and the role of judges is to determine what it is.
In practice there is a dynamic tension between the evolution of the common law as it finds and adapts to new problems, and the intervention of the legislature when it decides that the law should be something else. Legislation itself quickly accretes case law that decides what the given piece of legislation means.
The opening Holmes quote is being used improperly. Holmes used that as an opening, then went on to find further evidence to support the conclusion. He did not rely on logic alone to determine the culprit, but rather went on to find hard evidence.
http://en.wikipedia.org/wiki/Lucia_de_Berk#Statistical_argum...