I think the point of "correlation does not imply causation" refers to the literal propositional-logic sense of "=>".
Yes, correlation suggests causation, i.e. P(causation|correlation) > P(causation) from a Bayes perspective. That doesn't mean you should discount the possibility of ¬causation, merely that its probability is smaller. And "how much smaller" could be very close to 0, so it would still be hasty to say "implies", which linguistically implies "=>".
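A toy Bayes update makes that inequality concrete. All three numbers below are made up purely for illustration:

```python
# Toy Bayes update: observing correlation raises P(causation),
# but need not raise it anywhere near 1.
p_c = 0.1                 # assumed prior P(causation)
p_corr_given_c = 0.9      # assumed: causation usually shows up as correlation
p_corr_given_not_c = 0.2  # assumed: correlation also arises without causation

# Law of total probability, then Bayes' rule.
p_corr = p_corr_given_c * p_c + p_corr_given_not_c * (1 - p_c)
p_c_given_corr = p_corr_given_c * p_c / p_corr
print(p_c_given_corr)  # 0.333..., up from the 0.1 prior, still far from 1
```

So P(causation|correlation) > P(causation) holds whenever causation makes correlation more likely than non-causation does, yet the posterior can stay well short of "implies".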
A better phrasing would be "correlation suggests but does not imply causation". (edit: e.g. as per that xkcd comic, mentioned by other posters. edit2: I mixed up the proof with the OP article. the proof uses "evidence of" which is also good.)
But yes, nice proof nonetheless. I like how causation is basically defined as P(c|a) = 1, showing how most complex philosophical issues are actually irrelevant (for this particular result).
I'm no expert on this subject, but the proof in the first article seems a bit dodgy. Bayesian inference is a type of induction [1]. So using it to prove induction merely begs the question?
[1] http://plato.stanford.edu/entries/induction-problem/#BaySub
P(c|a) is not 1. If we assume correlation means Pearson correlation, then the "shape" of the causation has to be linear for P(c|a) = 1. A U-shaped effect will probably give you a Pearson correlation of 0, etc.
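That U-shaped case is easy to check. A minimal pure-Python sketch (the `pearson` helper here is mine, not from the article):

```python
# A perfect quadratic (U-shaped) dependence can have exactly zero
# Pearson correlation, since Pearson only measures linear association.
def pearson(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x ** 2 for x in xs]  # y is fully determined by x...
print(pearson(xs, ys))     # ...yet Pearson r is 0.0
```

Here y is a deterministic function of x (the strongest possible "causal" link in the article's P(c|a) = 1 sense), yet the correlation vanishes because the symmetry cancels the linear component.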
I do love the work of Pearl et al. But even they do not admit Pearson correlation (or any other type), but rather the nebulous "statistical dependence". So the proof only works if your statistical dependence tool of choice matches the nature of your causal effect perfectly.
The actual title is "If correlation doesn’t imply causation, then what does?" which I think is a much more useful title and question.
I think that you can answer that. Science is not a series of isolated experiments that stand or fall based on their particular data. Instead, all of our judgments of causation depend on a series of nested broad and narrow assumptions about the world. The broadest assumption is perhaps that we have a material world whose substance lacks the ability to intentionally sabotage our experiments and from which we can generate uniformly distributed random samples. But there is a whole range of assumptions below that.
From this view, "extraordinary claims require extraordinary evidence": essentially, things that are consistent with our existing assumptions still need evidence, but not huge amounts. Things that would suddenly change our whole understanding of the world require much more. The faster-than-light neutrino experiments, in isolation, were probably a lot more convincing in just their statistics than a lot of experiments that get accepted without comment. But because those other experiments didn't contradict very established positions, their results weren't gone over with a fine-toothed comb. And that's how it should be.
Edit: the thing with a "calculus of causal inference" is that it would also have to include a way of taking into account the range of indirect assumptions that a given causal deduction depends on, so one would need something like a knowledge database.
This argument is absurd. It takes the obvious truth that causation will certainly lead to correlation, blindly flips it around, and claims it's somehow profound. Just bad reasoning from beginning to end. The correct reasoning - causation leads to correlation - is the basis of all science. When scientific theories are evaluated, the test is whether the theory correctly predicts observed results. You can't cook up a theory that fits a set of already-known data and claim you have formulated a causal relationship unless you can then predict the NEXT results over a substantial set of trials.
I would go one step further and argue that "causation" or "causality" is just a label that we attach to a special kind of correlation. The very concept of causality, it seems to me, is an attempt to impose human logical structures on a messy world that is fundamentally probabilistic.
The universe doesn't guarantee that if P, then Q. At best we can observe that if P at t1, then it is highly likely that Q at t2. We can often simplify that as "if P, then Q", just as we can approximate Einstein's physics with Newtonian physics for low-velocity applications. But at the end of the day, both are only approximations. The clear rules of logic only exist in our head, just as a perfect circle doesn't exist in reality.
If so, whether correlation implies causation is the wrong question to ask. A more important question is what kind of correlations we usually take to imply causation. We're probably looking at correlations that hold exclusively between two sets of events with an extremely high probability, with the right sort of temporal relationship. We could then say that those kinds of correlations simply are what we mean by causation, because there really is nothing else to say.
Once upon a time, most philosophers thought that the mind was some immaterial substance separate from the brain. Now many of us believe that certain functions of the brain are the mind. Perhaps we could apply a similar reductionism to the issue of correlations and causations, too.
First, correlation is symmetric between effects and their causes, while causation is not.
You may as well replace the phrase "correlation implies causation" with "correlation implies effectation".
For a careful treatment on correlation and causation, you should read Judea Pearl, one of the great living computer scientists. I highly recommend this casual read: https://www.nyu.edu/classes/shrout/SEM06/pearl.pdf
This article seems to miss the role of theories in the physical sciences. When we talk about 'cause', we understand that to mean that some chain of events, governed by the rules of physics, leads to the result. Yes, those rules of physics were arrived at largely as a result of observing correlations, but no one is going to propose military coups leading to orange harvests as a fundamental physical law on the basis of observed correlation.
http://andrewgelman.com/2014/08/04/correlation-even-imply-co...
"That is, correlation in the data you happen to have (even if it happens to be “statistically significant”) does not necessarily imply correlation in the population of interest."
> They had done the chemical reaction that blew up the lab 175 times before without incident; then, suddenly, something went wrong and the lab went boom and real, actual people died.
I really wish the author would expand more on this. Maybe due to my shallow knowledge of statistics, I've recently become baffled by the fact that an arbitrary number is used as a confidence interval, to state that something is true or false. And I'd guess most of today's world depends on these confidence intervals. Why is it that we're OK with stating that something is true if it's true 95% of the time? Or is it a case of "good enough": if it ain't broke, don't fix it (until it is)?
We are OK with it because almost always we don't know a better way. Every time we use statistics and have certain finite sample size, these confidence intervals will emerge.
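For instance, a standard 95% interval for a sample mean is mean ± 1.96·s/√n; the conventional "95%" enters only through the 1.96. A sketch with made-up data (true mean 10, so we know what the interval is trying to capture):

```python
# Sketch: a 95% confidence interval for a sample mean.
# The 95% level is a convention, not a law: about 1 in 20 such
# intervals is still expected to miss the true value.
import math
import random

random.seed(0)
sample = [random.gauss(10.0, 2.0) for _ in range(100)]  # true mean is 10

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
half = 1.96 * sd / math.sqrt(n)  # 1.96 is where "95%" comes in
print(f"95% CI: [{mean - half:.2f}, {mean + half:.2f}]")
```

Quadrupling the sample size only halves the interval's width, which is why some finite uncertainty always remains and a cutoff has to be chosen somewhere.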
The distinction is obvious: correlation does not include an ordering, but causation does. You can observe that two things both happened, and that is correlation. You can observe that one thing happened, and then another thing happened after, and stipulate causation. You can increase your certainty by a controlled experiment.
It seems like all the author is really saying is that experiments aren't good enough to produce 100% certainty of causation. Not all that shocking. But the author also seems to conflate correlation with uncertainty, and this is probably where the title comes from: increasing certainty from controlled experiments implies causation.
Correlation has a formula: it detects the linear relation between two variables. So a quadratic relation can actually have zero correlation.
Second, in academic research, we mean 'correlation doesn't imply direct causation'. Because we're talking science (what's significant), not astrology (as above, so below).
For example, the octopus predicts the results of football matches correctly most of the time. But as a scientific person, would you say that there is any conceivable causation? The important word is conceivable.
This is quoting Hume... I always find it hard to understand these figures, because I feel I need to know who influenced them, but then I need to know who influenced them... and so on.
What is the best way of getting summaries of philosophy from as close to the beginning as possible?
xyience | 10 years ago
Edit: also, regarding the last sentence, "...because all we have to help us establish causal relationships is correlation": the work of Pearl et al. gives us quite a bit more: http://www.michaelnielsen.org/ddi/if-correlation-doesnt-impl...
ikeboy | 10 years ago
>Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'.
dllthomas | 10 years ago
Increased purchases of gifts cause Christmas.
theoh | 10 years ago
See for example the irrefutable possibility of https://en.m.wikipedia.org/wiki/Occasionalism
JeffreyKaine | 10 years ago
The book "Sophie's World" gives a great intro here and is a really great weekend read.
http://www.amazon.com/Sophies-World-History-Philosophy-Class...