Leibniz followed very closely in the footsteps of the Neoplatonists, and he was what you'd call a rationalist's rationalist. He would later be rebuked by Hume (the famous is-ought problem, which made moral ("ought") questions fundamentally distinct from rational ("is") ones), and Kant would put the nail in the coffin of the rationalist-empiricist debate in the next century (with his earth-shattering Critique of Pure Reason). And if that wasn't enough, as the logical positivists of the early 20th century were still clinging to some form of mathematical completeness, Gödel proved that the project dreamed up by Leibniz (and more distantly by Plato) was a dead end, to Wittgenstein's, Russell's, and many others' dismay. Some things (even true ones!) are simply unprovable.
I love this story as it spans more than 2000 years, and even though the idea itself proved to be untenable, this search gave us the Enlightenment, the Industrial Revolution, the computer age, and beyond.
>and Kant would put the nail in the coffin of the rationalist-empiricist debate in the next century (with his earth-shattering Critique of Pure Reason)
Kant was mostly convincing for himself (of whom he was a great fan) and Kantians. His arguments were hardly definitive.
>Some things (even true ones!) are simply unprovable
Within the context of a consistent formal system powerful enough to express arithmetic.
More distant than that. It all comes from Euclid. Who, you know, was actually tremendously successful in that project.
Hmm. Do I understand your comment correctly: that thoughts should be either rational or not?
I think different kinds of thinking have their applications in different contexts. Gödel's theorems are hardly ever relevant to most of mathematics, and not in the least to computers (which are finite).
I also doubt that industrialism has anything to do with Leibniz or Hume. That part of history was most likely fuelled by greed for money, not by philosophical thought.
I wonder how his vector math was? Because it sounds like he had the conceptual underpinnings of the altar we’ve all been praying at lately.
It’s really humbling how a fundamentally simple algorithm interpreting a ridiculously complex and vast data set is capable of a simulacrum of thought.
That data set, of course, is the formalisation of human culture, taken from our works. It raises the question of whether our fundamental algorithm for parsing that data is really that much different. We cannot think without tokens, symbols.
We can be, and we can feel, but we cannot think without them. So, where is the intelligence, really? Is it in our heads, or in the data?
Is the data the computation, like an unfathomably vast choose-your-own-adventure book, or is the computation the data, like a one-time pad decryption algorithm that creates a universe simulator from the chaotic arrangement of atoms in a beach full of sand?
Is this really “artificial” intelligence, at all? Or just the steam engine of human intellect?
He was born around the same time as the invention of analytical geometry (i.e. the idea that it is possible to do geometry using algebra), and "vector math" (or linear algebra) came several decades later. So, his vector math was non-existent.
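For what it's worth, the core idea of analytical geometry, that a geometric fact becomes an algebraic identity once you pick coordinates, can be sketched in a few lines. A toy example (Thales' theorem, my choice of illustration, not anything from Leibniz): any point on a circle forms a right angle with the endpoints of a diameter, and in coordinates that is just a dot product vanishing.

```python
import math

# Thales' theorem via algebra: for P on a circle of radius r centred at
# the origin, and diameter endpoints A = (-r, 0), B = (r, 0), the vectors
# P->A and P->B are perpendicular, i.e. their dot product is zero.
def thales_dot(r: float, theta: float) -> float:
    px, py = r * math.cos(theta), r * math.sin(theta)  # P on the circle
    ax, ay = -r, 0.0                                   # one end of the diameter
    bx, by = r, 0.0                                    # the other end
    return (ax - px) * (bx - px) + (ay - py) * (by - py)

# The dot product vanishes (up to float rounding) for any angle theta.
for theta in (0.3, 1.0, 2.5):
    assert abs(thales_dot(2.0, theta)) < 1e-9
```

Algebraically the check is one line: (-r - r cos θ)(r - r cos θ) + (r sin θ)² = -r²(1 - cos²θ) + r² sin²θ = 0.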
Leibniz was envisioning something like Douglas Lenat's Cyc project. In his grant proposal to the French court, his first intended stage was to reduce all of human knowledge to propositional form. He estimated it would take about 5 years given a small clerical staff and a good library.
I've been reading 70s and 80s sci-fi and loving all of the ideas of the future they had. I don't see them today but I don't know what to read.
Mastering biological processes so that we can ensure that there is enough food for everyone (of a reasonable variety)[1] and that maybe the plants can do well and clean up our environment and make it nicer to live in?[2] Maybe to achieve long enough human lifespans that we can think of interstellar travel?[3]
1 - Hal Clements _Space Lash_, short story "Raindrop"
2 - L.E. Modesitt, Jr's The Forever Hero trilogy (as well as the "The Mechanic" from Space Lash)
3 - Poul Anderson's _The Boat of a Million Years_
I'd say Iain M. Banks' Culture series [0]. Arguably also written in the '90s, but it draws a bit more of a positive picture. I'd love to live in an environment like that.
[0] https://en.m.wikipedia.org/wiki/Culture_series
> Swift’s point was that language is not a formal system that represents human thought, but a messy and ambiguous form of expression.
Yep, even the greatest minds are susceptible to false dichotomies. We now have linguistics and computer science, yet some aspects of thought remain forever intractably messy.
I sometimes wonder if Turing picked factorial as the example in his 1949 "Checking a Large Routine"[0] purely by chance, or possibly as an homage to Leibniz, who was fond of them[1]?
[0] https://turingarchive.kings.cam.ac.uk/publications-lectures-...
[1] see "Dissertatio de arte combinatoria...", 1666 https://ia800906.us.archive.org/BookReader/BookReaderImages....
Note that this article precedes both ChatGPT and GPT-3. When it was written, Leibniz' idea of a machine reasoning by manipulating symbols was still science fiction. Now it is very much reality.
Does ChatGPT actually manipulate symbols, or does it just string characters together based on what most frequently occurs next? I haven't seen anything that indicates actual working with symbols/logic/ideas.
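The "what most frequently occurs next" picture can itself be written down in a few lines. A toy bigram model with greedy decoding (my own illustrative sketch; real LLMs learn vastly richer statistics, but the next-token training objective is similar in spirit):

```python
from collections import Counter, defaultdict

# Count, for each character, which character most often follows it.
def train(corpus: str) -> dict:
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

# Greedily emit the most frequent successor of the last character.
def generate(counts: dict, seed: str, length: int) -> str:
    out = seed
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break  # never seen this character; nothing to predict
        out += nxt.most_common(1)[0][0]
    return out

model = train("the theory of the thing")
print(generate(model, "t", 5))  # prints "the th"
```

Whether stacking enormously more context and parameters on top of this objective amounts to "manipulating symbols" is exactly the question being asked.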
Leibniz dreamed that we would be able to completely determine whether any claim was true or not. We can't do that for all claims, but at least in mathematics, we can prove with absolute certainty that some axioms will lead you to other claims. So at least a little piece of his dream is a reality.
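That little piece is concrete today in proof assistants. A minimal sketch in Lean 4 (assuming its core library): the claims below follow from the axioms with mechanical certainty, a tiny instance of Leibniz's "calculemus".

```lean
-- From the axioms of arithmetic as formalised in Lean's core library,
-- commutativity of addition follows with machine-checked certainty.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Even "2 + 2 = 4" is a theorem, checked by pure computation.
example : 2 + 2 = 4 := rfl
```

The checker either accepts the derivation or it doesn't; within the system, there is no ambiguity, which is precisely the fragment of the dream that survived Gödel.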