Relatedly, David Deutsch's "Simple refutation of the ‘Bayesian’ philosophy of science" (https://www.daviddeutsch.org.uk/2014/08/simple-refutation-of...):
> By ‘Bayesian’ philosophy of science I mean the position that (1) the objective of science is, or should be, to increase our ‘credence’ for true theories, and that (2) the credences held by a rational thinker obey the probability calculus. However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T (‘the sun is not powered by nuclear fusion’) is not an explanation at all. Therefore, suppose (implausibly, for the sake of argument) that one could quantify ‘the property that science strives to maximise’. If T had an amount q of that, then ~T would have none at all, not 1-q as the probability calculus would require if q were a probability.
> Also, the conjunction (T₁ & T₂) of two mutually inconsistent explanatory theories T₁ and T₂ (such as quantum theory and relativity) is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing.
> Furthermore if we expect, with Popper, that all our best theories of fundamental physics are going to be superseded eventually, and we therefore believe their negations, it is still those false theories, not their true negations, that constitute all our deepest knowledge of physics.
> What science really seeks to ‘maximise’ (or rather, create) is explanatory power.
Any refutation that depends on the fundamental unknowability of the Universe's rules trivially applies to every single philosophy of science.
Science must work despite it, or you don't have science.
And any other singularity you get from assuming the odds of a hypothesis being true are infinitely smaller than the odds of it being false is unrealistic. You shouldn't assume that.
> we expect, with Popper, that all our best theories of fundamental physics are going to be superseded eventually
This inductive case against scientific knowledge should only serve to decrease our second-order credence in the proposition that we have assigned the highest credence to the scientific hypotheses that most closely correspond with reality. It does nothing to change the fact that, conditional on evidence we currently have, we may very well have correctly proportioned credence.
Bullshit like this is exactly why I think scientists are better philosophers than philosophers are. The text you've quoted is, frankly, not the writing of an intelligent person.
The reason I'm being very blunt about this is that bullshit like this is actively harmful. Science is fucking important. Science is what resulted in the technology you're using to read this. Science is, with non-negligible probability, the basis of the medicine that prevented you from dying before the age of 5 and left you able to read this. When philosophers posit that they can inspect their own navels and find deep truths about the world, they are undermining one of the fundamental pillars of society that holds up so much of the positive change humans have been able to make.
We need to call this what it is--nonsense and misinformation--and stop amplifying its signal.
> By ‘Bayesian’ philosophy of science I mean the position that (1) the objective of science is, or should be, to increase our ‘credence’ for true theories, and that (2) the credences held by a rational thinker obey the probability calculus. However, if T is an explanatory theory (e.g. ‘the sun is powered by nuclear fusion’), then its negation ~T (‘the sun is not powered by nuclear fusion’) is not an explanation at all. Therefore, suppose (implausibly, for the sake of argument) that one could quantify ‘the property that science strives to maximise’. If T had an amount q of that, then ~T would have none at all, not 1-q as the probability calculus would require if q were a probability.
Of course "the sun is not powered by nuclear fusion" IS an explanation, it's just not an explanation of a phenomenon we observe, which is why most scientists don't believe "the sun is not powered by nuclear fusion". If we observed something about the sun that was not consistent with the hypothesis that it is powered by nuclear fusion, "the sun is not powered by nuclear fusion" would indeed be an explanation of what we were observing.
This is all sidestepping the absurdity that Deutsch doesn't seem to understand that "none at all" has a mathematical representation, 0, meaning that if the probability of ~T is p = 1 - q = 0, then q = 1. This is not difficult math here, folks.
> Also, the conjunction (T₁ & T₂) of two mutually inconsistent explanatory theories T₁ and T₂ (such as quantum theory and relativity) is provably false, and therefore has zero probability. Yet it embodies some understanding of the world and is definitely better than nothing.
Uh sure, which is why nobody with a brain takes the conjunction of those two things. This isn't a criticism of Bayesian philosophy of science, it's a straw man argument.
> Furthermore if we expect, with Popper, that all our best theories of fundamental physics are going to be superseded eventually, and we therefore believe their negations, it is still those false theories, not their true negations, that constitute all our deepest knowledge of physics.

Deutsch apparently doesn't know what an approximation is, and instead thinks of correct/incorrect as a binary. Relevant: https://hermiene.net/essays-trans/relativity_of_wrong.html
Thank you for beating me to it. Astounding how those LessWrong dorks were able to revive the corpse of this dead end of epistemology after it was so thoroughly destroyed by Popper. And to what benefit... sex crimes, massive financial fraud, murder cults, and a far-right government?

Here's a fun one: https://sci-hub.3800808.com/10.1038/302687a0
I never thought I'd see a misunderstanding of what "implies" means in science versus in logic as the fundamental mistake in a paper on logic for science.
Here's the truth table for implies (if) in logic.
| A | B | If A then B |
|---|---|-------------|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |
Show this to anyone in the sciences who hasn't done logic and you'll instantly get the objection: "But hang on, the two rows at the bottom don't fit!"
This is where you need to add temporal logic, so that the scientific understanding of "A causally implies B" can be represented in logic.
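To make the objection concrete, here is a minimal sketch (plain Python, nothing beyond the standard library) that enumerates material implication and reproduces the table above, including the two "vacuously true" rows that bother scientists:

```python
from itertools import product

# Material implication: "if A then B" is false only when A is true and B is false.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

# Enumerate all four rows of the truth table, in the same order as the table above.
table = [(a, b, implies(a, b)) for a, b in product([True, False], repeat=2)]
for a, b, r in table:
    print(f"{a!s:5} {b!s:5} {r!s:5}")
```

Note that the last two rows come out true whenever A is false, regardless of B; that is the behavior causal or temporal readings of "implies" refuse to accept.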
In short, the paper does nothing like what it claims to do, because it fundamentally uses the wrong tool for the job.
First, I think Popper did not fundamentally disagree with the Bayesian approach: ultimately, his critique of Bayesianism is a small adjustment to it. When presented with a set of competing hypotheses { h_0, h_1, h_2, ... }, Bayesianism says that we assign a probability to each, whereas Popper points out that experiments and observations never actually add credence to a hypothesis; they only ever decrease the probability of a hypothesis.
In theory, decreasing the probability of hypothesis h_0 does not increase the probability of hypothesis h_1, because the set of hypotheses is infinite--that is to say for any n there is always a potential h_{n+1} which has not been thought of by scientists. We know that the sum of the probabilities of hypotheses must be 1, but since the set of hypotheses is an infinite set, decreasing the probability of h_0 does not necessarily increase the probability of h_1, because the probability of any h_n in the set could be increasing.
I think Popper is right in theory, but in practice, I think this is less important than philosophers think it is. Pragmatically, we can treat the set of hypotheses as finite, operating only on the hypotheses humans have... hypothesized. We have no way to operate on hypotheses nobody has thought of, so we just operate on the set of hypotheses we have thought of. Since the sum of this set of probabilities must be 1, decreasing the probability of one hypothesis does increase the probability of all the other hypotheses in the set. Where Popper's critique becomes important is if we keep decreasing the probabilities of all the hypotheses in the set--this indicates that the hypothesis which is true is not in the set (i.e. nobody has come up with the correct hypothesis to test). This indicates a need for new hypotheses. But in a lot of cases, experimentation keeps decreasing the probability of all the hypotheses except one in the finite set of hypotheses humans have thought of, while numerous attempts to decrease the probability of that one hypothesis fail. While Popper would say this does not increase the probability of that one hypothesis, operating as if it does increase the probability of that hypothesis seems to work. We've been able to go to the moon, eradicate smallpox, and build intelligent-seeming machines, all based on hypotheses that we "increased the probability of" in this way, even though increasing the probability of a hypothesis is theoretically impossible.
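The pragmatic picture above can be sketched numerically. This is a toy illustration only: the hypothesis names, priors, and likelihoods below are invented for the example, not taken from any real experiment. It shows that over a finite, normalized hypothesis set, evidence that disconfirms one hypothesis necessarily raises the posteriors of the others:

```python
# Toy Bayesian update over a finite hypothesis set.
# Priors and likelihoods are made-up numbers for illustration only.
priors = {"h0": 1 / 3, "h1": 1 / 3, "h2": 1 / 3}

# P(evidence | h): the observed evidence is very unlikely under h0.
likelihoods = {"h0": 0.01, "h1": 0.5, "h2": 0.5}

# Bayes' rule: posterior is proportional to prior * likelihood,
# renormalized so the finite set still sums to 1.
unnorm = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnorm.values())
posteriors = {h: p / total for h, p in unnorm.items()}

# Disconfirming h0 raised h1 and h2, because the finite set must sum to 1.
print(posteriors)
```

Popper's point is that this renormalization step is only licensed once you decide to pretend the hypothesis set is closed; with an open, infinite set, the probability mass freed up by h0 has nowhere definite to go.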
This all goes back to philosophers' favorite navel-gazing claim: that nothing is knowable. Ultimately, I think this is a dishonest argument which even philosophers don't believe. I've offered to punch many a philosopher in the face: after all, it's not knowable that it's going to hurt. But strangely, philosophers who claim to believe that nothing is knowable DO seem to know that it will hurt, and none have taken me up on my offer.*
* Unlike the philosophers, I do believe in (probabilistic) knowability, and I'm highly confident that punching them in the face would hurt them, so if anyone actually takes me up on this offer, I (probably) won't punch them. So far, no one has called my bluff.