Lee has spent his entire career grappling with the issue of science denial — he’s the author of books on post-truth and defending science from fraud, the latter of which he drew on for this essay. Here, he holds the state of the social sciences up to the prescientific “dark ages” of medicine, an unlikely source for guidance, and lays out a path forward.
"Like medicine, social science is subjective. And it is also normative. We have a stake not just in knowing how things are but also in using this knowledge to make things the way we think they should be. We study voting behavior in the interest of preserving democratic values. We study the relationship between inflation and unemployment in order to mitigate the next recession. Yet unlike medicine, so far social scientists have not proven to be very effective in finding a way to wall off positive inquiry from normative expectations, which leads to the problem that instead of acquiring objective knowledge we may only be indulging in confirmation bias and wishful thinking."
The article did not acknowledge the two biggest problems with the social sciences: the inability to run many experiments for ethical reasons, and the fact that these disciplines often study human social systems whose "laws" are mutable.
There's not much we can do about the first problem except slowly accrete knowledge through retrospective study of actual events, and maybe fill in some gaps with simulation.
The second problem is more solvable. I do believe that there are actually laws in the social sciences, but that they are far fewer in number than the phenomena people want to study. Many social phenomena are simply artifacts of existing systems, and may shed light on behavior in a particular case but are not generalizable beyond that system.
The law of supply and demand is a good example of a universal law, largely because it lies at the junction of physical systems and social systems. The physical aspects of supply and demand, such as the amount of arable land nearby or the caloric requirements of a population, can be measured well thanks to our physical knowledge and provide a decent jumping-off point for expanding the theory.
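To make that junction concrete, here is a minimal sketch of the textbook linear model, where the measurable pieces pin down a market-clearing price. The coefficients are invented purely for illustration:

```python
# Toy linear market: quantity demanded falls with price, quantity
# supplied rises with it. All coefficients here are made up.
def equilibrium(d0, d1, s0, s1):
    """Solve d0 - d1*p = s0 + s1*p for the market-clearing price."""
    p = (d0 - s0) / (d1 + s1)   # price where demand meets supply
    q = s0 + s1 * p             # quantity traded at that price
    return p, q

# e.g. demand = 100 - 2p, supply = 10 + 4p
p, q = equilibrium(100, 2, 10, 4)  # p = 15.0, q = 70.0
```

The point of the toy is only that once the physically measurable quantities (the intercepts and slopes) are known, the social outcome follows mechanically.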
I disagree that some things are not quantifiable. Everything can be counted, from the number of neural connections in our brains to the number of widgets that a factory can produce. As usual, the limiting factor in science is not a lack of theory but a lack of instrumentation and thus data. I am basically an optimist about the social sciences as our measurement abilities continue to grow. However, we are in the dark ages regarding the tools we have to collect data.
I'm always amazed that someone who by all rights should be intimately familiar with the history of social science research, like McIntyre, can seemingly be so unaware of it. There was an empirical, modernist period in the social sciences from the 1940s to the 1980s of exactly the kind suggested by Lee, and it was an absolute disaster. The results couldn't be related to the real world, and no one was really able to get past the inherent [1] influence of the author on the data. After a lot of arguing and furious paper writing in the late 80s-90s, everyone began working on how to deal with that bias, moving it front and center where it can be handled instead of subtly hiding it in a bunch of numbers and pretending it didn't exist. Does it make people who aren't familiar with what's going on think everyone is a bunch of raving lunatics? Yes. It creates a heck of an image problem, not helped by the fact that some smaller percentage of academics are ideological lunatics. But it's far more manageable for their peers to deal with than trying to solve an impossible problem.
[1] If you don't understand why that's an inherent problem, ask yourself whether a white male like myself would get the same results doing research on menstruation in Kabul as an Afghan woman would, or an Arab doing sentiment polls immediately after 9/11.
This is an intellectually lazy comment. It is by no means obvious that the gender and ethnicity of the researcher would impact that hypothetical experiment, or what the impact would be, or that it couldn't be avoided with careful experimental design. If the social sciences had a better general track record of producing reproducible results with practical value for society then I would be inclined to give them the benefit of the doubt. But the reality is that most researchers have squandered their credibility by chasing mirages. If it were up to me I would cut funding for the entire field.
> After a lot of arguing and furious paper writing in the late 80s-90s, everyone began working on how to deal with that bias, moving it front and center where it can be handled instead of subtly hiding it in a bunch of numbers and pretending it didn't exist.
The social sciences have gained an incredible amount of control over academia. Why would they allow the current system, which works very well for them, to be "fixed"?
Overall my understanding is that it is harder to be an academic in almost any field these days, but especially in the humanities. The social sciences are an in-between zone, but I doubt the universities cutting language and history programs consider the social sciences more 'science' than 'humanities'.
Seems to me that there is a stigma around social sciences - both for legitimate reasons, and because, like anything which shines a light on the uncomfortable (systemic bias, racism, capitalism, etc) people get defensive and afraid and reactive and concoct conspiracy theories.
It has always made me uneasy that much of the data in social science studies consists of 5-point-scale answers given by volunteers to questionnaires, or of observations of microcosmic situations that do not really represent reality (e.g. attempting to prove a hypothesis about generosity by giving participants $20 each and observing how they spend or gift it in a specially designed situation).
I wonder whether questionnaires should be replaced by more objective metrics, such as heart rate, pupil dilation, blood hormone levels, EEGs of the participants.
The other problem I see is the lack of predictive models in the social sciences, especially psychology. The type of models we have today are akin to 'celestial spheres' in ancient physics and the boiler theory of fever in medieval medicine (which led to blood-letting).
Some questions to ponder: how is a record of behavior (the focus of behavioral science) any less objective than heart rate, pupil dilation, etc.? Also, is heart rate really a measure of psychological state? Hormone levels are a mess as a measure of psychobehavioral state. EEGs are useful but are opaque in terms of underlying mechanisms.
People do use all the things you mention, and they're certainly useful, but also have limitations.
If you want to know how someone feels about liberal politics, for example (I'll let you pick the US or Britain), it's far easier and more direct to ask them than to try to infer it from heart rate responses etc.
I'm not saying those other things are useless, only that there's a reason Likert scales continue to be used so much.
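For what it's worth, part of why Likert items persist is how cheaply they aggregate into a scale score. A toy scoring routine (the items, answers, and reverse-coded index are all invented for illustration) might look like:

```python
# Toy Likert scoring: average a respondent's 5-point answers after
# reverse-coding negatively worded items (1 <-> 5, 2 <-> 4).
def likert_score(answers, reverse=()):
    vals = [6 - a if i in reverse else a for i, a in enumerate(answers)]
    return sum(vals) / len(vals)

# Four items, where item 2 is hypothetically negatively worded
score = likert_score([4, 5, 2, 4], reverse={2})  # (4 + 5 + 4 + 4) / 4 = 4.25
```

Nothing about heart rate or EEG data aggregates this directly into "how the respondent feels about X", which is the practical draw of the questionnaire.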
Some food for thought from the other side of the coin:
In any event, my complaint about the piece is that it picks on the social sciences when biomedicine is rife with corruption and irreproducibility itself. There's a kind of bullying at work here: biomedicine has plenty of its own problems, so it takes them out on a scapegoat. (The social sciences do have many problems, but many of them apply equally well to other fields.)
The social scientists do everything for grant money. Foundations don't give grant money to figure out the truth. They give grant money to get someone to make some scientific-sounding propaganda that supports whatever narrative they're pushing.
Who comes up with the narrative to push though, and why?
>The social scientists do everything for grant money.
That's a very absolute statement. Do you have some sort of data to back that up? I mean, in a way it is true that research can only be conducted if there is grant money, but that should be true for all of the sciences.
>Foundations don't give grant money to figure out the truth. They give grant money to get someone to make some scientific-sounding propaganda that supports whatever narrative they're pushing.
This might be true sometimes but do you think it is true for the majority of research? Not all foundations even have a "narrative" as far as I can tell.
>Who comes up with the narrative to push though, and why?
Isn't that a weirdly open question to ask at the end of a very assertive statement?
This article conflates all research in the "social sciences", when in fact methodological practices vary widely across disciplines and sub-disciplines. Within each field there is a "qualitative" literature, much less successful and popular than it once was, and probably not very useful, although I like some ethnographic and anthropological studies. Mainstream economics and political science are incredibly mathematically sophisticated; in fact, people are coming around to the idea that there may have been too much emphasis on quantitative gymnastics over things like formulating simpler hypotheses or doing more descriptive work. Sociology has a bit of both. Psychology was the main culprit in the replication crisis, but even the behavioural psychology work cited approvingly in the article (Kahneman) is, I think, somewhat speculative in relating its hypotheses to experiment.
Without doubt, in economics and political science at least, the problem is that the research questions are infinitely more complex than in 19th-century medicine, not that the methods lack quantitative rigor. The questions are also obviously of a more normative and moral nature than in medicine.
For those interested in this, take a look at the "grievance studies" work by Peter Boghossian, James Lindsay, and Helen Pluckrose. They wrote intentionally unscientific studies with the intent of getting them published, and a few actually were. The objective was to demonstrate not only that academia in the social sciences operates according to a completely different standard than the hard sciences, but that it is so ideologically driven that it behaves in much the same way a religion might.
I think the main takeaway from the Sokal Squared hoax was that reviewers in this field are either a) extremely biased and approve any results that agree with their beliefs, b) anti-science, or c) scientifically illiterate, because they couldn't even spot the painfully clear methodological problems that were inserted into the hoax studies on purpose, precisely to test the quality of review.
They would have got a lot more published, except that their experiment was exposed early by a reporter.
The social sciences (along with most of the humanities) appear to be beyond saving at this point and it might be a good idea to make a hard break with the past and start again with people trained outside the field.
People (perhaps you too) seem to have the idea that only the social sciences are vulnerable to publishing stings like the one Boghossian and his friends pulled off, but that's not true[0][3]. Next, the places the "hoax" papers were [mostly] submitted to were of low prestige (i.e., poorly ranked)[1]. The paper they wrote summarizing their conclusions does not even rigorously define which field(s) in the social sciences they have issues with; instead they collect them under the nebulous term "grievance studies", a term they made up themselves to deride academics (so much for the scientific spirit of camaraderie[4]).
The "hoax" they pulled off doesn't seem to be showing what they say it does, at least not to the degree they suggest; from here[2]:
>Let’s analyze the hoax a bit more carefully. The team wrote up 21 bogus papers altogether. (The essay starts by saying there were only 20; according to Lindsay, that’s because two of the papers were largely similar to one another.) Of those 21, two-thirds never were accepted for publication. The Areo essay dwells on several papers that had been rejected outright, including one suggesting that white students should be enchained for the sake of pedagogy, and another proposing that self-pleasure could be a form of violence against women. They take it as a sign of intellectual decay that such papers managed to elicit respectful feedback from reviewers, even short of publication.
Academics warn against doing what Boghossian and friends did for their own good[5].
>The hoax was cruel; it sought to discredit targeted journals by setting a trap that exploited the scholarly predispositions of their editors and reviewers. Moreover, because of the anonymous review system, once the tricksters revealed their intent, only the editors whose names appear on the journal’s masthead suffered the sting of adverse public scrutiny. More finessed responses by the disgruntled threesome would have employed tactics such as persuasion, insight, engagement with the actual scholarship, and good sportsmanship (Bergstrom, 2018).[6]
Most importantly, the researchers didn't include a control group for their study. How can they claim to be outing "bad science" when their own methodology is so poor that it proves nothing beyond anecdote and suspicion?
One of the reviewers subjected to the hoax wrote:
>"Anyways, I guess I could be more critical in the future, but I assumed a grad student had written a confusing paper and I tried to be constructive. I'm embarrassed that I took it as seriously as I did, I'm annoyed I wasted time writing a review, and I'm glad I rejected it."[8]
P.Z. Myers put the situation best[7]:
>If you can find a bad article accepted for publication in a feminist journal, please do jump on it and tear it apart. That contributes to the strength of the discipline. Don’t write a bunch of bad articles of your own, which are clearly intended only to weaken the whole discipline and provide a set of easy, straw-man arguments that you can use to pretend you’re a smart guy.
And for the icing on the cake: Sokal himself isn't all that impressed with Boghossian's efforts[9]. A good thread discussing the hoax is on the social sciences subreddit[10].
Sounds like, unsurprisingly, the author hasn't spent much time considering any of these questions themselves... they just have all the answers for those who do (thanks, buddy!).
Some questions deal with issues that are simply not quantifiable (even the question he poses about immigration is not completely quantifiable). And even questions that are quantifiable are often affected by beliefs (e.g., what happened to inflation in the 70s was, in part, a consequence of what people believed about the Phillips curve in the 1960s).
Perhaps more relevant: social science went through this phase nearly 100 years ago (in history, more than 100 years ago), and the issue was resolved, often more than 50 years ago^. For example, in history, E.H. Carr: people are not objective and arguments are often contingent, but there are facts; present your argument and let your reader judge for themselves.
The most harmful thing is to claim that there can be objective truth about these issues. Economics has thrown itself against the rocks far too often. The trend towards this in history in the mid-to-late 19th century produced some extremely unimportant work.
This can also tend towards quackery. I remember a biologist in my local politics department (relatively prestigious) got a ton of funding because he believed he had found a way to identify terrorists by their physical attributes (srs, not joking; last I checked he had over $1m in government funding). Some people, like the author, are just unaware of the wider context. Less preaching about ways to "solve" social science, more listening (btw, in my experience all of the above applies to scientific research too... all research is contingent).
^ We first had the move towards (broadly) logical positivism/empiricism, then to post-modernism when that seemed ridiculous, and now to (imo) a reasonably healthy medium.
Very well put. Regarding history and its own past battles with this illusory search for "correctness" and mathematics-like precision, I'm really surprised that the author seems to have completely ignored Popper's well-known "The Poverty of Historicism" [1].
I'd also strongly recommend that the author check Raymond Aron's "Introduction to the Philosophy of History: An Essay on the Limits of Historical Objectivity" [2], a book first published in French in 1938 and translated into English in 1948.
I find it sad that some people still open up this discussion about trying to make history more "exact", more physics-like; I thought we had already established that this is an impossible task.
“The truth is that such questions are open to empirical study and it is possible for social science to study them scientifically.”
Here is someone proclaiming to be a scientist, and then throwing out proclamations of ‘truth’ not backed by evidence. Goedel proved logically that any axiomatic system of information exchange can have truths that are not provable.
It’s possible human culture and society is too diverse to make claims of ‘absolute truth’ about. A statistical-mechanics view of why this might be true is telling: the more entropic states available, the more potential outcomes. That is why physics studying single atoms or molecules is more ‘understandable’ than sociology studying 10^35 of them (humans being clouds of atoms).
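The combinatorial point can be made concrete with a toy count (not a physical model; the two-state "agents" are an invented simplification):

```python
import math

# Toy microstate count: n agents, each in one of k states, have k**n
# joint configurations, so the entropy (log of the count) grows
# linearly with the number of agents.
def entropy_bits(n_agents, k_states=2):
    return n_agents * math.log2(k_states)

single_atom = entropy_bits(1)      # 1.0 bit: easy to characterize
small_town = entropy_bits(10_000)  # already 2**10000 configurations
```

Even at this crude level, the state space of a small population dwarfs anything an experimenter can sample exhaustively, which is the intuition behind the comment above.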
>Here is someone proclaiming to be a scientist, and then throwing out proclamations of ‘truth’ not backed by evidence. Goedel proved logically that any axiomatic system of information exchange can have truths that are not provable.
The trouble with this argument is that Goedel's proof applies to systems that can model Peano arithmetic; in particular, it assumes the system deals with numbers that are arbitrarily large. For example, Goodstein's theorem, commonly cited as the simplest unprovable theorem, depends on a function like this:
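The original comment's code did not survive, but the function in question is Goodstein's hereditary "base-bump" step; a rough Python sketch might look like:

```python
def hereditary_bump(n, base):
    """Write n in hereditary base-`base` notation (exponents rewritten
    recursively), replace every `base` with `base + 1`, and evaluate."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        digit = n % base
        if digit:
            result += digit * (base + 1) ** hereditary_bump(power, base)
        n //= base
        power += 1
    return result

def goodstein(n, steps):
    """First `steps` terms of the Goodstein sequence starting at n."""
    seq, base = [], 2
    for _ in range(steps):
        seq.append(n)
        if n == 0:
            break
        n = hereditary_bump(n, base) - 1
        base += 1
    return seq
```

For instance, the sequence starting at 3 is 3, 3, 3, 2, 1, 0, while the sequence starting at 4 climbs to 26, 41, ... and only returns to zero after a vast number of steps; that every such sequence terminates is the statement Peano arithmetic cannot prove.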
Most undecidable statements depend on things that look vaguely Diophantine, but sociology is rarely Diophantine. Rather, it tends to be the case that an approximate solution is still kind of a solution.
> Goedel proved logically that any axiomatic system of information exchange can have truths that are not provable.
No, that is a misrepresentation of Goedel's results. A theorem that is undecidable (neither provable nor refutable) from a set of axioms cannot be 'truth' in the logical sense (because there are models of that set of axioms in which the theorem is true, and other models in which the theorem is false) - see Goedel's completeness theorem, which says that every truth is provable (and vice versa).
Goedel's incompleteness theorems can be understood on the semantic level as saying that the mathematical structure of the natural numbers cannot be characterized by any sane set of axioms: any such attempt (e.g., the Peano axioms) that describes the natural numbers also describes a different mathematical structure (a nonstandard model of arithmetic), and there exists a theorem that is true in one model and false in the other (so that theorem is undecidable).
Goedel is not really relevant to this, in the same way that the insolubility of the halting problem doesn't prevent this message from getting from my phone to HN to your screen. It's a theoretical limit on what can be computed, but we're nowhere near running up against it.
(Sure, someone's phone will crash now, but they can still get back here somehow if they're that bothered).
"We hold these truths to be self-evident, that all men are created equal..."
All men are not equal, never have been, and probably never will be. So why is that line so famous? Why has it influenced the course of history, and not just in the US? What that sentence and its effects on history show us is that when people are faced with the Unprovable, they have a choice: sit back, do nothing, and accept it, OR decide what they want the truth to be. Unsurprisingly it's always the latter group that makes change happen. The rest just fall asleep reading Goedel.
choeger: Wait. Do I understand you correctly that an objective approach failed and your conclusion is to be more subjective?
gridlockd: And the result of that is... what exactly?
abdullahkhalids: Can you cite at least one paper from this series?
adamsea: Social sciences is nowhere near the top of this list.
https://nsf.gov/statistics/2018/nsb20181/report/sections/aca...
Some food for thought from the other side of the coin: https://aeon.co/essays/the-blind-spot-of-science-is-the-negl...
swebs: https://en.wikipedia.org/wiki/Science_wars
specialist: https://www.youtube.com/watch?v=yi3erdgVVTw
Just another form of social engineering. Haha, we compiled footage from a bunch of randos to show how stupid people are. Haha, aren't we smart!
It's the academic equivalent of tricking someone into walking into a punch.
[0] https://www.newscientist.com/article/dn17288-crap-paper-acce...
[1] https://i.redd.it/qsi6i5rbv3q11.png
[2] https://slate.com/technology/2018/10/grievance-studies-hoax-...
[3] https://platofootnote.wordpress.com/2017/05/24/an-embarrassi...
[4] https://www.3quarksdaily.com/3quarksdaily/2018/10/bad-argume...
[5] https://www.sciencedirect.com/science/article/abs/pii/S03783...
[6] https://journals.sagepub.com/doi/full/10.1177/14733250198338...
[7] https://freethoughtblogs.com/pharyngula/2018/10/03/give-it-a...
[8] https://twitter.com/dwschieber/status/1047497301021798400
[9] https://www.chronicle.com/article/What-the-Conceptual/240344
[10] https://www.reddit.com/r/AskSocialScience/comments/9noxmp/is...
[+] [-] unknown|6 years ago|reply
[deleted]
[+] [-] hogFeast|6 years ago|reply
Some questions just deal with issues that are just not quantifiable (even the question he poses about immigration is just not completely quantifiable). And even questions that are quantifiable are often affected by beliefs (i.e. what happened to inflation in the 70s was a consequence, in part, of what people believed about the Philips Curve in the 1960s).
Perhaps more relevant: social science went through this phase nearly 100 years ago (in history, more than 100 years ago). And this issue was resolved often more than 50 years ago^. For example in history, EH Carr: people are not objective, arguments are often contingent but there are facts, present your argument, let your reader judge for themselves).
The most harmful thing is to claim that there can be objective truth about these issues. Economics has thrown itself against the rocks far too often. The trend towards this in history at mid/end of the 19th century produced some extremely unimportant work.
This can also tend towards quackery. I remember a biologist in my local politics dept (relatively prestigious) got a ton of funding because he believed he had found a way to spot the physical attributes of terrorists (srs, not joking, last I checked he had over $1m in funding from govt). Some people, like the author, are just unaware of the wider context. Less preaching about ways to "solve" social science, more listening (btw, in my experience all of the above applies to scientific research too...all research is contingent).
^ We first had the move towards (broadly) logical positivism/empiricism, then to post-modernism when that seemed ridiculous, and now to (imo) a reasonably healthy medium.
[+] [-] paganel|6 years ago|reply
I'd also strongly recommend that the author check Raymond Aron's "Introduction to the Philosophy of History: An Essay on the Limits of Historical Objectivity" [2], a book first published in French in 1938 and translated into English in 1948.
I find it sad that some people still reopen this discussion about trying to make history more "exact", more physics-like; I thought we had already established that that is an impossible task.
[1] https://en.wikipedia.org/wiki/The_Poverty_of_Historicism
[2] https://www.amazon.com/Introduction-philosophy-history-histo...
[+] [-] JulianMorrison|6 years ago|reply
Isn't this literally phrenology, risen from its grave?
[+] [-] mensetmanusman|6 years ago|reply
“The truth is that such questions are open to empirical study and it is possible for social science to study them scientifically.”
Here is someone claiming to be a scientist, and then throwing out proclamations of 'truth' not backed by evidence. Goedel proved logically that any axiomatic system of information exchange can have truths that are not provable.
It's possible human culture and society is too diverse to make claims of 'absolute truth' about. A statistical-mechanics analogy suggests why this might be true: the more entropic states available, the more potential outcomes. That is why physics studying a single atom or molecule is more 'understandable' than sociologists studying 10^35 of them (humans being clouds of atoms).
[+] [-] scythe|6 years ago|reply
The trouble with this argument is that Goedel's proof relies on systems which can model Peano arithmetic; in particular, it assumes that the system deals with numbers that are arbitrarily large. For example, Goodstein's theorem, the commonly-stated simplest unprovable theorem, depends on a function like this:
f(2) = 19, f(3) = 7.6 × 10^12, f(4) = 1.3 × 10^154, ...
Most numbers in sociology are not so large. The number of possible subsets of the human population is between f(7) and f(8).
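The commenter's f values aside, the scale claim is easy to sanity-check. A minimal sketch, assuming a world population of roughly 8 billion (my figure, not the thread's):

```python
import math

# The number of subsets of the human population is 2^N for N people.
# Assumption: N ~ 8 billion (current world population, my estimate).
N = 8_000_000_000

# Decimal digits of 2^N, via log10(2^N) = N * log10(2).
digits = math.floor(N * math.log10(2)) + 1
print(f"2^{N} has about {digits:,} decimal digits")
```

Large by everyday standards, but still a concretely computable magnitude, nowhere near the towers of exponentials where fast-growing functions like the one above (f(4) ≈ 10^154 and beyond) take off.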
Additionally, theorems about discrete systems do not always apply to continuous systems. For example, the theory of real closed fields is decidable:
http://en.wikipedia.org/wiki/Real_closed_field
Most undecidable statements depend on things that look vaguely Diophantine, but sociology is rarely Diophantine. Rather, it tends to be the case that something approximately a solution is still, in effect, a solution.
[+] [-] zajio1am|6 years ago|reply
No, that is a misrepresentation of Goedel's results. A theorem that is undecidable (neither provable nor refutable) from a set of axioms cannot be a 'truth' in the logical sense, because there are models of that set of axioms in which the theorem is true and other models in which it is false; see Goedel's completeness theorem, which says that every truth is provable (and vice versa).
Goedel's incompleteness theorems can be understood on the semantic level as follows: the mathematical structure of the natural numbers cannot be characterized by a sane set of axioms, so any such attempt (e.g. the Peano axioms) that describes the natural numbers also describes a different mathematical structure (a nonstandard model of arithmetic), and there exists a theorem that is true in one model and false in the other (so that theorem is undecidable).
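For readers keeping score, the two theorems the reply distinguishes can be stated side by side (standard textbook formulations, not from the thread):

```latex
% Completeness (1929): semantic consequence coincides with provability.
T \models \varphi \iff T \vdash \varphi
% First incompleteness (1931): for any consistent, recursively axiomatizable
% theory T interpreting arithmetic, some sentence is undecidable in T:
\exists \varphi \;\; T \nvdash \varphi \quad \text{and} \quad T \nvdash \neg\varphi
```

So a sentence undecidable in, say, Peano arithmetic is true in some models of the axioms and false in others; "true but unprovable" only makes sense relative to a fixed intended model.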
[+] [-] sideshowb|6 years ago|reply
(Sure, someone's phone will crash now, but they can still get back here somehow if they're that bothered).
[+] [-] kodz4|6 years ago|reply
All men are not equal, never have been, and probably never will be. So why is that line so famous? Why has it influenced the course of history, and not just in the US? What that sentence and its effects on history show us is that when people are faced with the unprovable, they have a choice: sit back, do nothing, and accept it, OR decide what they want the truth to be. Unsurprisingly, it's always the latter group that makes change happen. The rest just fall asleep reading Goedel.
[+] [-] thrwayxyz|6 years ago|reply