This reminds me of the case of Lucia de B., a nurse once suspected of killing her patients.
There never was solid evidence against her. She was a suspect because nearly every time someone died in the hospital, she was on shift.
The initial chance of her being present at these supposed murders while being innocent was estimated at 1 in 7 billion. This number caused the police to focus their investigation entirely on Lucia de B.
Eventually she was convicted of the murders based not on hard evidence or witnesses, but on statistics. The chance of her innocence was put at 1 in 342 million.
The econometrician Aart de Vos was the first to notice that the initial Bayesian analysis was plainly wrong. For example, the analysts had presumed that the murderer had to be found among the nurses; other possibilities were neglected. They also hadn't corrected for combining p-values. His corrections revised the estimated chance that Lucia was innocent to 1 in a million.
The court said it had abandoned the statistical "proof", but remained of the view that it couldn't be a coincidence. Yet possible murder cases had been selected only when they counted against Lucia; other possible murder cases were left out of the equation. Correcting for this further revised the chance to 1 in 50.
Statisticians Richard Gill and Piet Groeneboom revised the chance further, to 1 in 9.
In my opinion, statistics alone can never be adequate for a conviction. It can muddy an investigation and lead to confirmation bias. And the difference between a Bayesian chance of 1 in 7 billion and 1 in 9 is so large that it casts even more doubt on the initial use of statistics.
I have an MS in statistics, teach graduate statistics to econ PhD students (essentially; it's an 'econometrics' class); do research on statistical methods, etc.
I'm really sympathetic to your point. I would not want the guilt or innocence of myself, a family member, a friend, etc. to depend on the numeracy and statistical literacy of a jury, with the stats explained to them through lawyers (even my own!).
As to the OP, the key sentence for me is:
> ...the fire had been started by a discarded cigarette, ... the other two explanations were even more implausible
Only TWO other possible explanations for the cause of a fire!?!?! Regardless of the word choice, I think the appeals court has a better intuitive understanding of some of the pitfalls of model-based Bayesian statistics than the OP (namely, if you accidentally put zero weight on parts of the prior, you will necessarily put zero weight on those parts of the posterior).
edit: if I'm going to be listing credentials, I should point out that my econ PhD is in econometrics; I don't want to give the impression that my MS in stats is a sufficient credential to teach econ PhD students.
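The zero-prior pitfall mentioned above can be shown in a few lines; the candidate causes and every number here are hypothetical:

```python
# Hypothetical causes and numbers; "other" gets prior weight zero,
# mimicking a model that failed to enumerate a possible cause.
priors      = {"cigarette": 0.5, "arcing": 0.3, "arson": 0.2, "other": 0.0}
likelihoods = {"cigarette": 0.010, "arcing": 0.001, "arson": 0.002, "other": 0.900}

unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
z = sum(unnormalised.values())
posterior = {h: w / z for h, w in unnormalised.items()}

# However strongly the evidence favours "other", its posterior stays 0.
print(posterior["other"])  # 0.0
```

No amount of evidence can resurrect a hypothesis the model never admitted, which is exactly the danger of "only three possible explanations".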
I do not think I understand your argument. In the case as you describe it, the statistics were done incorrectly and caused a potential false conviction.
How is this different from an autopsy expert doing the job incorrectly, which leads to a potentially false conclusion?
In both cases an expert made a mistake or did their job poorly, leading to a potentially unfortunate outcome. Why should one be banned and not the other?
> In my opinion statistics alone can never be adequate for conviction.
This is in an English court, and it is a civil case. Thus, there is no "conviction". For a criminal case the standard is "beyond reasonable doubt". For a civil case the standard is "balance of probability".
So, no-one is going to jail or getting a criminal record. But they might have to pay compensation to someone else for fire damage.
The phrase "...beyond a reasonable doubt" comes to mind. I would hope a jury involved in such a case would be more reasonable about actual hard evidence, but I hear that juries are selected for emotionality and not rationality. I would hope that the judge in such a case would be more reasonable and demand some actual evidence.
Such is the case in a fearful populace, one that demands vengeance over justice. It appears as though something bad happened, so someone must pay, and ... here's a magic formula with missing variables that says you might have done it, so ... GUILTY.
"In my opinion statistics alone can never be adequate for conviction."
I'm not sure I understand what you suggest there, because, what else is there?
Let's say a murder conviction is based on 37 different security recordings clearly showing the defendant murdering the victim. The recordings make it a certainty that the defendant is guilty.
But what is "certainty"? It's an expression of probability, and that comes from statistics. Given the evidence, what are the odds that the defendant is not guilty? Extremely low. We have to come up with pretty crazy alternatives. The probability of the defendant being innocent in this case might be, let's say, 1 in 10^20.
We don't need to actually carry out this calculation, because we don't need the precise probability. It's clear that the probability is extremely low, so we can take a shortcut and refrain from figuring out exactly how low. But that shortcut is still ultimately an exercise in statistics.
It's not the court's decision that's jarring, it's the argumentation.
If the court argued from human unreliability in enumerating options, the judgement would be sensible. But simply arguing from a mistaken understanding of probability makes it look silly.
Bad statistics shouldn't be considered adequate for anything. I have no problem with solid statistics being used assuming they have been properly scrutinized by experts before being used to pass judgement.
So many convictions come based on simple circumstantial evidence without even considering the statistical basis for those circumstances. Many convictions are entirely emotional bias on the part of the jury. Why should allowing statistical analysis to be considered as evidence be considered worse than the status quo?
The quotations in this article are taken out of context and presented incompletely. For example, the article quotes:
> The chances of something happening in the future may be expressed in terms of percentage. Epidemiological evidence may enable doctors to say that on average smokers increase their risk of lung cancer by X%. But you cannot properly say that there is a 25 per cent chance that something has happened: Hotson v East Berkshire Health Authority [1987] AC 750. Either it has or it has not.
And the judgement continues:
> In deciding a question of past fact the court will, of course, give the answer which it believes is more likely to be (more probably) the right answer than the wrong answer, but it arrives at its conclusion by considering on an overall assessment of the evidence (i.e. on a preponderance of the evidence) whether the case for believing that the suggested event happened is more compelling than the case for not reaching that belief (which is not necessarily the same as believing positively that it did not happen).
Which is exactly the Bayesian approach of 'probability as a state of knowledge'.
The quotes are contradictory. The Bayesian approach agrees with the second and disagrees with the first.
I suppose you could weasel in some sort of consistency about the wording ... something like "You cannot properly say there is a 25 percent chance that something has happened, but you can properly say you believe there is a 25 percent chance that something has happened." OK, fine, but what's the difference? You still assign a percent probability to the chance that something happened, and you still rule on the basis of those beliefs. Or do you? (It's not clear to me from those quotes.)
Sherlock Holmes was a detective, not a judge. What was appropriate for his line of work isn't appropriate in the courtroom. Courts are not about what MIGHT happen (or have happened); they are about what DID (or did not) happen. Probability can be an excellent guide for further investigation into these things, but it should be left to the investigators. If they can't find anything harder than a set of odds, then judges have no place betting on them.
Still, the argument "either A has happened or it has not happened" is silly. What matters is the knowledge we base our decisions on (ultimately we don't even know the outside world really exists, we only have a set of measurements and infer the existence of an outside world from that), and sometimes that is only partially certain.
> If they can't find anything harder than a set of odds, then judges have no place betting on them.
But this is how our entire legal system is set up. For criminal proceedings, we have to find the defendant 'guilty beyond reasonable doubt'. What is 'reasonable doubt' if not a (theoretically) quantifiable or parameterizable representation of our belief that the defendant committed the criminal act?
We don't explicitly quantify this threshold as, e.g., a 95% chance, but that's still the exact same process we require a jury to undergo, even if we don't attach quantifiable numbers to the results.
The jury can never return a probability of 1 (in statistical terms, 'almost certain'), or else the entire appeal system wouldn't exist.
This seems like a complete oversimplification and a naive analysis of the situation, and the argument smells like a straw man and an appeal to authority, comparing it to Sherlock Holmes, who incidentally /is obviously wrong/. The idea that 'whatever remains' is quantifiable and finite in the real world stands in stark contrast with reality, and in fact whether a probability is precisely 0 or 1 is very significant to its power in drawing conclusions. So not only does forbidding probability not forbid Sherlock Holmes-style 'logic', but his logic is broken anyway in the vast majority of real-world cases.
What the court says is common sense... it's a shame it offends some academic view... oh, no, wait, it really isn't.
The key phrase in the linked article:
> and so I must now tell them that the entire philosophy behind their course has been declared illegal in the Court of Appeal. I hope they don't mind.
Is what reveals the misunderstanding. Judges don't "declare" things illegal, they rule on matters of what the law is. That the law requires judgements to be rendered in certain terms is not a statement of the legality of making Bayesian arguments.
The law requires a decision. The conceptual basis of the common law is a promise that a court will always render a clear and specific set of rulings or orders, and that there will always be an explanation of those rulings or orders. Judges are not free to say to disputants that they are X% likely to win their claim.
This is one of those places where Bayesian probability isn't a good fit. Fuzzy logic -- so passé these days -- at least has an understood mechanism for "defuzzifying" in a fashion which lawyers would find quite familiar.
It also overlooks that judges have many other places to introduce flexibility and weighed judgement. For example, judges may assign blame in portions for some crimes or torts; they may reject, moderate or modify some claims in equity; they have leeway to combine multiple considerations and legislative constraints in handing down criminal sentences and so on.
One thing that bugs me tremendously about outsiders looking in at law is the assumption that, since lawyers don't immediately and entirely embrace new idea X, they are fusty old fools who are an impediment to the good. It's an argument born of ignorance that lawyers are deliberately obtuse fools, or judges out-of-touch theoreticians. Lawyers and judges touch on more problem domains in more depth, with greater consequences, than pretty much every academic and every software developer.
> and the argument smells like a straw man and appeal to authority, comparing it to sherlock holmes
A humorous reference to literature is hardly an appeal to authority.
> who incidentally /is obviously wrong/
He's not /obviously wrong/; he's subtly wrong in not explicitly acknowledging that the enumeration of options, not the evaluation of their probability, is usually the difficult part. He's obviously right if you assume all possibilities are enumerated.
> actually the fact that a probability is precisely 0 or 1 is very significant to its power in drawing conclusions
0 and 1 are not probabilities, and while certainty has power in drawing conclusions, it only really occurs in purely theoretical discussions.
> what the court says is common sense...
Whether it's common sense or not does not make it valid.
> oh, no, wait, it really isn't
Is that about it being common sense, or about it offending academic views?
"I teach the Bayesian approach to post-graduate students attending my 'Applied Bayesian Statistics' course at Cambridge, and so I must now tell them that the entire philosophy behind their course has been declared illegal in the Court of Appeal. I hope they don't mind."
Classic. I was told (at the same uni) that, in a case involving DNA evidence, I could ask to be dismissed from a jury by citing training in Bayesian statistics. (That kind of reasoning was prohibited because reversing the claimed scientific certainty to look at false positives essentially kills the usefulness of DNA evidence in cases with weak circumstantial evidence: if someone was merely in the right city, you expect hundreds of DNA matches out of millions of people, but if you can track them to the right street at the right time, you would expect 0.001 matches, and thus the evidence points to them much more strongly.)
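The base-rate effect described in that parenthetical can be sketched in two lines; the pool sizes and the 1-in-10,000 coincidental-match rate are made-up numbers for illustration:

```python
def expected_false_matches(pool_size, match_probability):
    # Expected number of purely coincidental matches among a pool of
    # candidates, before any other evidence narrows the pool down.
    return pool_size * match_probability

# Illustrative numbers: a 1-in-10,000 coincidental-match rate.
print(expected_false_matches(2_000_000, 1e-4))  # whole city: about 200 people match
print(expected_false_matches(50, 1e-4))         # one street at one time: about 0.005
```

With the whole city as the candidate pool, a match means little; once other evidence shrinks the pool to a few dozen people, the same match becomes powerful.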
Everything you hear about law that was not conveyed to you by a lawyer, a law lecturer or a judge is probably horseshit.
People love to think they know some obscure wrinkle, some cool loophole, some nifty curio about the law. But so many of the stories you hear are just stories.
ps. a better way to get disqualified is to have studied law.
The quote is stupid. Nobody declared anything illegal. They put boundaries on what evidence can be presented to juries, which is a key function of judges.
In the American rules of evidence, evidence is only admissible if its probative value outweighs its prejudicial effect. That's why, for example, judges ban information, generally, of past crimes. While criminals tend to be more likely to commit another crime, the jury gives such evidence weight beyond its actual value. The same can be true of expert statistical testimony.
There it was held that an expert witness shouldn't use Bayesian reasoning to calculate probabilities to tell a jury "outside the field of DNA (and possibly other areas where there is a firm statistical base)".
I think that, by a 'firm statistical base', the court's getting at the sort of situation where the right prior is widely agreed on (such as DNA, where the size of the DNA database is known), so there won't be much opportunity for different expert witnesses to disagree on probabilities due to having different priors. (N.B. IANAL)
This is still nonsense, IMHO. I can understand the court not wanting juries to be overly swayed by a spuriously precise probability that might have been very different if a different expert had been chosen. But there's no justification for restricting the statistical methods that the expert uses to reach their conclusion, however it's expressed to the jury.
Interestingly, that case shared one of the appeal judges from the case in TFA (Lord Justice Beatson, then Mr Justice Beatson).
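The prior-sensitivity worry above can be made concrete with a toy calculation: two experts apply the same likelihood ratio (a hypothetical 100) to different priors and reach very different posteriors.

```python
def posterior(prior_guilty, likelihood_ratio):
    # Convert a prior probability of guilt and a likelihood ratio
    # (how much more probable the evidence is if guilty) into a
    # posterior probability, via the odds form of Bayes' theorem.
    odds = prior_guilty / (1 - prior_guilty) * likelihood_ratio
    return odds / (1 + odds)

lr = 100  # both experts agree the evidence is 100x likelier if guilty
print(posterior(0.50, lr))  # expert A's prior 0.5  -> posterior ~0.99
print(posterior(0.01, lr))  # expert B's prior 0.01 -> posterior ~0.50
```

Same evidence, same method, wildly different conclusions: this is presumably what the court meant by wanting a "firm statistical base" before letting such numbers reach a jury.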
I think the distinction being made here is the difference from using statistical methods for determining overall guilt versus using statistical methods for determining specific facts about the case.
We use statistics to say that someone was or was not at the scene of a crime (DNA testing doesn't 100% guarantee that a person's blood is actually a match), for example. We don't make the leap that since they were there, "odds are" they did it, however.
> *In upholding the first instance decision, the Court of Appeal reiterated the principle in cases where there are competing explanations for a particular loss that causation cannot be established only by a process of elimination such that the 'least unlikely' cause of a loss is identified. A claimant must demonstrate that the particular version of events that they rely upon is more likely to have happened than not, in order for the civil burden of proof to be satisfied.*
I'm not sure what the problem is.
"It could have been A, B, or C. It's really unlikely to have been A or B, and thus it must be C" is obviously flawed, because for all we know it could have been D, and even if it was C you need to show (on the balance of probabilities) that C is the cause. Not just that C is more likely than A or B.
It's not the judge's role to independently nominate D. It's up to the lawyers for the plaintiff and the respondent to present facts and make legal arguments.
The judge's role is to weigh those legal arguments, and in (most) civil trials to weigh the facts presented on the balance of probability, and to render a decision.
If you require judges to run a Bayesian network over the entire universe for each case, the legal system will get a wee bit slower.
The problem probably lies in the fact that you cannot prove something by a reductio ad absurdum based on a probability distribution.
Law requires positive proof (or it should), not just "this chain of events is so unlikely under other assumptions that our assumptions must be right". This is where Sherlock's quotation is misleading: he says "rule out the IMPOSSIBLE", not the improbable.
The judge is right epistemologically; the conflict with Bayesian statistics is only terminological.
The whole basis of law is the elimination of "epistemic uncertainty" via a threshold of evidence for an event. If this is not possible, the case is not proved, and therefore the case is dismissed.
Sherlock Holmes's statement is a statement of method, not a statement of proof.
In other words, the law is far greater than Bayesian probability or any other method, including the scientific one, since the burden of proof is met from the evidence, under the primary legal threshold of "innocent until proven guilty".
Poker players understand that a lack of knowledge of past events is exactly the same as uncertainty about future events.
For example, if the first round of hold em has dealt two cards to each of 9 players, there are 18 cards out and you only know the state of two of them. The odds that the flop will contain specific cards that help your hand are calculated against the total number of unknown cards, regardless if the unknowns are held by other players or still in the deck.
For example, you get a pair of 3s. There are two more threes in the deck, and it is pretty likely that if the flop contains one of them that you will have the best hand at the table at that point. For the first card turned up at the flop, there is a 2/50 chance it will be a three, the next is 2/49, the next 2/48. It doesn't matter how many face-down cards have been dealt to your opponents.
All the unknowns are still in the pool of possibilities, and it is no different when you assess the evidence you have of any other type of event that has already happened. Each bit of evidence means what it means, and the unknowns contribute to the pool of uncertainty.
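The flop odds above can be checked directly by counting combinations of the 50 unseen cards:

```python
from math import comb

# Holding a pair of threes, 50 cards are unseen and two of them are
# threes. P(flop has at least one three) = 1 - P(all three flop cards
# come from the 48 non-threes).
p_no_three = comb(48, 3) / comb(50, 3)
p_at_least_one = 1 - p_no_three
print(round(p_at_least_one, 4))  # 0.1176
```

Note the calculation never asks how many of the 50 unseen cards sit face-down in opponents' hands, which is exactly the commenter's point: unknown is unknown, past or future.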
I'm not sure what the OP expects the court of appeal to do. They are tasked with determining whether there is cause for appeal; in this specific case, whether the original judge erred in law. They determined that he did not. They aren't saying whether the judgement was right or wrong, rather that it was arrived at in a lawful manner.
For anyone curious about how this would play out in the US, federal courts and most state courts use the Daubert standard. TLDR: experts (including statisticians) can testify if they're using fairly standard methods and there are no significant gaps in the evidence-->analysis-->testimony chain.
Accidents, murders, fires, etc. are all unlikely events by nature. Judging whether one of them occurred based on probability does not prove anything. The author's post is wrong in its assumption that "an unlikely event" = "impossible". Probability does NOT establish certitude. There is always an expression of confidence, which is not equal to 100%, and therefore it seems logical that a court does not take the probability of an event as tangible proof of what did or did not happen. Probability != science.
90% of the work of a jury trial is managing human reactions to things. There are things that judges need to know about statistics, e.g. that in modern DNA forensics, lab error totally dominates the probability of a coincidental match, limiting effective accuracy to a 1-2% error rate. On the other hand, they must be deeply cognizant of human responses. A mathematician, told that some test shows a person 1,000 times more likely than average to be the killer, will intuitively realize that the person is still unlikely to be the killer. A jury of ordinary people won't. We have judges precisely to mediate between these two domains.
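The "1,000 times more likely, yet still unlikely" intuition is a quick Bayes calculation; the population size and likelihood ratio below are illustrative:

```python
population = 1_000_000    # assumed pool of people who could be the killer
prior = 1 / population    # before the test: everyone equally likely
likelihood_ratio = 1_000  # "1,000 times more likely than average"

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

# Despite the damning-sounding ratio, guilt is still very unlikely.
print(round(posterior, 6))  # ~0.000999
```

Multiplying a one-in-a-million prior by a thousand still leaves roughly a one-in-a-thousand chance, which is the mismatch between mathematical and lay intuition the comment describes.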
Mathematicians are people who define probability as the measure of some well-defined subset of some well-defined set of all events.
The people who use probability theory to build approximate models of real world events are called statisticians (and those who forget about "approximate" - applied statisticians).
http://en.wikipedia.org/wiki/Lucia_de_Berk#Statistical_argum...
In that case, her chance of being present would be 1 in 1. But it doesn't mean she did it.
The initial analysis used Fisher's Exact Test, which is a frequentist rather than a Bayesian method.
[+] [-] mikeash|13 years ago|reply
I'm not sure I understand what you suggest there, because, what else is there?
Let's say a murder conviction is based on 37 different security recordings clearly showing the defendant murdering the victim. The recordings make it a certainty that the defendant is guilty.
But what is "certainty"? It's an expression of probability, and that comes from statistics. Given the evidence, what are the odds that the defendant is not guilty? Extremely low. We have to come up with pretty crazy alternatives. The probability of the defendant being innocent in this case might be, let's say, 1 in 10^20.
We don't need to actually carry out this calculation, because we don't need the precise probability. It's clear that the probability is extremely low, so we can take a shortcut and refrain from figuring out exactly how low. But that shortcut is still ultimately an exercise in statistics.
[+] [-] Anderkent|13 years ago|reply
If the court argued from human unreliability in enumerating options, the judgement would be sensible. But simply arguing from a mistaken understanding of probability makes it look silly.
[+] [-] lucisferre|13 years ago|reply
So many convictions come based on simple circumstantial evidence without even considering the statistical basis for those circumstances. Many convictions are entirely emotional bias on the part of the jury. Why should allowing statistical analysis to be considered as evidence be considered worse than the status quo?
[+] [-] Anderkent|13 years ago|reply
The chances of something happening in the future may be expressed in terms of percentage. Epidemiological evidence may enable doctors to say that on average smokers increase their risk of lung cancer by X%. But you cannot properly say that there is a 25 per cent chance that something has happened: Hotson v East Berkshire Health Authority [1987] AC 750. Either it has or it has not.
And the judgement continues:
In deciding a question of past fact the court will, of course, give the answer which it believes is more likely to be (more probably) the right answer than the wrong answer, but it arrives at its conclusion by considering on an overall assessment of the evidence (i.e. on a preponderance of the evidence) whether the case for believing that the suggested event happened is more compelling than the case for not reaching that belief (which is not necessarily the same as believing positively that it did not happen).
Which is exactly the bayesian approach of 'probability as state of knowledge'.
[+] [-] bo1024|13 years ago|reply
I suppose you could weasel in some sort of consistency about the wording ... something like "You cannot properly say there is a 25 percent chance that something has happened, but you can properly say you believe there is a 25 percent chance that something has happened." OK, fine, but what's the difference? You still assign a percent probability to the chance that something happened, and you still rule on the basis of those beliefs. Or do you? (It's not clear to me from those quotes.)
[+] [-] Millennium|13 years ago|reply
Sherlock Holmes was a detective, not a judge. What was appropriate for his line of work isn't appropriate in the courtroom. Courts are not about what MIGHT happen (or have happened); they are about what DID (or did not) happen. Probability can be an excellent guide for further investigation into these things, but it should be left to the investigators. If they can't find anything harder than a set of odds, then judges have no place betting on them.
[+] [-] Tichy|13 years ago|reply
[+] [-] alan_cx|13 years ago|reply
[+] [-] chimeracoder|13 years ago|reply
But this is how our entire legal system is set up. For criminal proceedings, we have to find the defendant 'guilty beyond reasonable doubt'. What is 'reasonable doubt' if not a (theoretically) quantifiable or parameterizeable representation of our belief that the defendant committed the criminal act?
We don't explicitly quantify this threshold as, eg. a 95% chance, but that's still the exact same process we require a jury to undergo, even if we don't attach quantifiable numbers to the results.
The jury can never return a probability of 1 (in statistical terms, 'almost certain'), or else the entire appeal system wouldn't exist.
[+] [-] jheriko|13 years ago|reply
what the court says is common sense... its a shame it offends some academic view... oh, no, wait, it really isn't.
[+] [-] jacques_chester|13 years ago|reply
The key phrase in the linked article:
> and so I must now tell them that the entire philosophy behind their course has been declared illegal in the Court of Appeal. I hope they don't mind.
Is what reveals the misunderstanding. Judges don't "declare" things illegal, they rule on matters of what the law is. That the law requires judgements to be rendered in certain terms is not a statement of the legality of making Bayesian arguments.
The law requires a decision. The conceptual basis of the common law is a promise that a court will always render a clear and specific set of rulings or orders, and that there will always be an explanation of those rulings or orders. Judges are not free to say to disputants that they are X% likely to win their claim.
This is one of those places where Bayesian probability isn't a good fit. Fuzzy logic -- so passé these days -- at least has an understood mechanism for "defuzzifying" in a fashion which lawyers would find quite familiar.
It also overlooks that judges have many other places to introduce flexibility and weighed judgement. For example, judges may assign blame in portions for some crimes or torts; they may reject, moderate or modify some claims in equity; they have leeway to combine multiple considerations and legislative constraints in handing down criminal sentences and so on.
One thing that bugs me tremendously about outsiders looking in at law is the assumption that, since lawyers don't immediately and entirely embrace new idea X, they are fusty old fools who are an impediment to the good. It's an argument born of ignorance that lawyers are deliberately obtuse fools, or judges out-of-touch theoreticians. Lawyers and judges touch on more problem domains in more depth, with greater consequences, than pretty much every academic and every software developer.
[+] [-] Anderkent|13 years ago|reply
A humorous reference to literature is hardly an appeal to authority.
who incidentally /is obviously wrong/
He's not /obviously wrong/, he's subtly wrong in not explicitly acknowledging that the enumeration of options, not evaluation of their probability, is usually the difficult part. He's obviously right if you assume all possibilities are enumerated.
actually the fact that a probability is precisely 0 or 1 is very significant to its power in drawing conclusions
0 and 1 are not probabilities, and while certainty has power in drawing conclusions, it only really occurs in purely theoretical discussions.
> what the court says is common sense...
Whether it's common sense or not does not make it valid.
> oh, no, wait, it really isn't
Is that about it being common sense, or it offending academic views?
andrewaylett | 13 years ago
After all, it's pretty much certain that you don't know everything, isn't it?
(For clarity: I'm not entirely serious, but I do agree with the parent's opinion on Sherlock Holmes.)
tehwalrus | 13 years ago
Classic. I was told (at the same uni) that, in a case involving DNA evidence, I could ask to be dismissed from a jury by citing training in Bayesian statistics. (Such reasoning was prohibited because reversing the stated scientific certainty to look at false positives, in cases with weak circumstantial evidence, essentially kills DNA evidence's usefulness: if someone was merely in the right city, you'd expect hundreds of DNA matches out of millions of people; but if you can track them to the right street at the right time, you'd expect something like 0.001 matches, and then the evidence points to them much more strongly.)
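The city-vs-street contrast above can be made concrete with Bayes' rule. Every number below (pool sizes, random match probability) is an assumption of mine for illustration, not a figure from the comment:

```python
# Sketch: how the size of the suspect pool changes the weight of a DNA match.
def posterior_guilt(pool_size, random_match_prob):
    """P(guilty | DNA match), assuming exactly one guilty person in the
    pool, a uniform prior over the pool, and a perfect true-positive rate."""
    prior = 1.0 / pool_size
    p_match = prior + (1 - prior) * random_match_prob
    return prior / p_match

city = posterior_guilt(pool_size=2_000_000, random_match_prob=1e-4)   # ~0.005
street = posterior_guilt(pool_size=50, random_match_prob=1e-4)        # ~0.995

# Expected coincidental matches in each pool:
city_false_matches = 2_000_000 * 1e-4   # ~200 innocent matches in the city
street_false_matches = 50 * 1e-4        # ~0.005 innocent matches on the street
```

With these toy numbers the same DNA match is nearly worthless against a city-sized pool and nearly conclusive against a street-sized one, which is the commenter's point.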
jacques_chester | 13 years ago
People love to think they know some obscure wrinkle, some cool loophole, some nifty curio about the law. But so many of the stories you hear are just stories.
PS: a better way to get disqualified is to have studied law.
rayiner | 13 years ago
Under the American rules of evidence, evidence is admissible only if its probative value outweighs its prejudicial effect. That's why, for example, judges generally exclude evidence of past crimes: people with prior convictions are statistically more likely to offend again, but juries give such evidence weight beyond its actual probative value. The same can be true of expert statistical testimony.
Anderkent | 13 years ago
Could you elaborate? What kind of (valid) use of DNA evidence is undermined by Bayesian reasoning?
SEMW | 13 years ago
There it was held that an expert witness shouldn't use Bayesian reasoning to calculate probabilities for a jury "outside the field of DNA (and possibly other areas where there is a firm statistical base)".
I think that, by a "firm statistical base", the court is getting at the sort of situation where the right prior is widely agreed upon (such as DNA, where the size of the DNA database is known), so there won't be much opportunity for different expert witnesses to disagree on probabilities because they hold different priors. (N.B. IANAL)
This is still nonsense, IMHO. I can understand the court not wanting juries to be overly swayed by a spuriously precise probability that might have been very different if a different expert had been chosen. But there's no justification for restricting the statistical methods that the expert uses to reach their conclusion, however it's expressed to the jury.
Interestingly, that case shared one of the appeal judges from the case in TFA (Lord Justice Beatson, then Mr Justice Beatson).
diminoten | 13 years ago
We use statistics to say that someone was or was not at the scene of a crime (DNA testing doesn't guarantee with 100% certainty that a person's blood is actually a match), for example. We don't then make the leap that, since they were there, "odds are" they did it.
DanBC | 13 years ago
I'm not sure what the problem is.
"It could have been A, B, or C. It's really unlikely to have been A or B, and thus it must be C" is obviously flawed, because for all we know it could have been D; and even if it was C, you still need to show (on the balance of probabilities) that C is the cause, not just that C is more likely than A or B.
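That point can be made numerical: renormalising over a closed list {A, B, C} makes C look compelling, but an overlooked D changes the picture entirely. The likelihoods below are assumed purely for illustration:

```python
# Toy numbers showing why "it must be C" depends entirely on the
# candidate list being exhaustive.
likelihoods = {"A": 0.01, "B": 0.02, "C": 0.10}
p_c_closed = likelihoods["C"] / sum(likelihoods.values())  # ~0.77, compelling

# Admit an overlooked cause D, and C stops being the favourite at all.
likelihoods["D"] = 0.30
p_c_open = likelihoods["C"] / sum(likelihoods.values())    # ~0.23
```

Nothing about C's evidence changed between the two lines; only the enumeration did.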
jacques_chester | 13 years ago
The judge's role is to weigh those legal arguments, and in (most) civil trials to weigh the facts presented on the balance of probability, and to render a decision.
If you require judges to run a Bayesian network over the entire universe for each case, the legal system will get a wee bit slower.
flexie | 13 years ago
This is a question about whether or not a cigarette butt started a fire, not a rejection of Bayesian probability as such.
This guy should stick to teaching statistics, not law.
jacques_chester | 13 years ago
It is not correct.
The Court of Appeal has not "banned" Bayesian probability.
It does not have the power to do so.
pfortuny | 13 years ago
Because law requires positive proof (or it should), not just "this chain of events is so unlikely under other assumptions that our assumptions must be right". This is where the Sherlock quotation is misleading: he says rule out the IMPOSSIBLE, not the improbable.
The judge is right epistemologically. Bayesian statistics has only a terminological issue.
drucken | 13 years ago
Sherlock Holmes's statement is a statement of method, not a statement of proof.
In other words, the law demands more than Bayesian probability or any other method, including the scientific one: the burden of proof must be met from the evidence, under the primary legal presumption of "innocent until proven guilty".
jeremyjh | 13 years ago
For example, if the first round of hold 'em has dealt two cards to each of 9 players, there are 18 cards out and you only know the state of two of them. The odds that the flop will contain specific cards that help your hand are calculated against the total number of unknown cards, regardless of whether the unknowns are held by other players or still in the deck.
For example, you get a pair of 3s. There are two more threes among the unknown cards, and it is pretty likely that if the flop contains one of them you will have the best hand at the table at that point. For the first card turned up at the flop there is a 2/50 chance it will be a three; if it isn't, the next card has a 2/49 chance, and the third 2/48. It doesn't matter how many face-down cards have been dealt to your opponents.
All the unknowns are still in the pool of possibilities, and it is no different when you assess the evidence you have of any other type of event that has already happened. Each bit of evidence means what it means, and the unknowns contribute to the pool of uncertainty.
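The flop odds above can be checked in a few lines of Python (the card counts come from the example; the code is just a sketch in hypergeometric form):

```python
from math import comb

# With a pair of 3s, 50 cards are unknown to you and 2 of them are threes.
# Opponents' face-down cards are simply part of the same unknown pool.
unknown, threes, flop = 50, 2, 3
p_no_three = comb(unknown - threes, flop) / comb(unknown, flop)
p_at_least_one = 1 - p_no_three  # ~0.118, same as chaining 2/50, 2/49, 2/48
```

Dealing to 2 players or 9 doesn't change `unknown`, which is the commenter's point.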
Colman | 13 years ago
http://en.wikipedia.org/wiki/Daubert_standard http://en.wikipedia.org/wiki/Daubert_v._Merrell_Dow_Pharmace...
jacques_chester | 13 years ago
But it deals with the largest and messiest possible problem domain: everything humans do.
mich41 | 13 years ago
The people who use probability theory to build approximate models of real world events are called statisticians (and those who forget about "approximate" - applied statisticians).