(no title)
12907835202|1 year ago
Math seems like low hanging fruit in that regard.
But logic as it's used in philosophy feels like it might be a whole different and more difficult beast to tackle.
I wonder if LLMs will just get better to the point of being indistinguishable from logical reasoning, rather than actually achieving it.
Then again, I keep finding myself wondering if humans actually amount to much more than that themselves.
ben_w|1 year ago
1847, wasn't it? (George Boole). Or 1950-60 (LISP) or 1989 (Coq) depending on your taste?
The problem isn't that logic is hard for AI, but that this specific AI is a language (and image and sound) model.
It's wild that transformer models can build enough of an understanding of free-form text and images to get close, but using one like this is akin to using a battleship's main gun to crack a peanut shell.
(Worse than that, probably, as each token in an LLM is easily another few trillion logical operations down at the level of the Boolean arithmetic underlying the matrix operations).
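The order of magnitude checks out. A back-of-the-envelope sketch in Python (the model size and the gates-per-FLOP figure are illustrative assumptions, not numbers from this thread):

```python
# Rough cost of one forward pass, in Boolean operations.
params = 70e9                  # assumed: a 70B-parameter transformer
flops_per_token = 2 * params   # ~2 FLOPs per parameter per token is the standard estimate
gates_per_flop = 1e3           # assumed: very rough gate-op count for one FP multiply-add
print(f"{flops_per_token * gates_per_flop:.1e} Boolean ops per token")  # ~1.4e+14
```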
If the language model needs to be part of the question-solving process at all, it should only be to transform the natural-language question into a formal specification, then pass that specification directly to another tool which can use it to generate and return the answer.
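A minimal sketch of that division of labour, assuming a hypothetical llm_to_smt translation step (stubbed out below); the z3-solver package and its parse_smt2_string call are real:

```python
from z3 import Solver, parse_smt2_string, sat

def llm_to_smt(question: str) -> str:
    # Hypothetical: an LLM translates the natural-language question into
    # a formal SMT-LIB specification. Hard-coded here with the answer for
    # "Find an integer x with 2x + 3 = 11."
    return "(declare-const x Int) (assert (= (+ (* 2 x) 3) 11))"

def solve(question: str):
    spec = llm_to_smt(question)          # language model: translation only
    solver = Solver()
    solver.add(parse_smt2_string(spec))  # formal tool: the actual reasoning
    return solver.model() if solver.check() == sat else None

print(solve("What integer x satisfies 2x + 3 = 11?"))  # [x = 4]
```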
entropicdrifter|1 year ago
If you'd double-check your intuition after having read the entire internet, then you should double-check GPT models.
xanderlewis|1 year ago
It might seem that way, but if mathematical research consisted only of manipulating a given logical proposition until all possible consequences had been derived, then we would have been done long ago. And we wouldn't need AI (in the modern sense) to do it.
Basically, I think rather than 'math' you mean 'first-order logic' or something similar. The former is a very large superset of the latter.
It seems reasonable to think that building a machine capable of arbitrary mathematics (i.e. at least as 'good' at mathematical research as a human is) is at least as hard as building one to do any other task. That is, it might as well be the definition of AGI.
glial|1 year ago
Here's a paper working along those lines: https://arxiv.org/abs/2402.03620
MR4D|1 year ago
Your comment made me think of something: how do we know that logic-based AI is even the relevant model? I mean, how do we know that humans are driven by logical reasoning and not by statistical intelligence?
Royce-CMR|1 year ago
Longer prompt responses here: https://text.is/1V0OJ
Closing position below.
*Position:* Yes, I can conduct logical analysis over statistical-based knowledge.
*Justification:*
1. *Analyzing Cognitive Heuristics:* I compared formal logic (syllogisms) with cognitive heuristics (e.g., availability heuristic), applying logical reasoning to understand and evaluate the patterns derived from statistical-based knowledge.
2. *Evaluating AI Outputs:* I examined AI's ability to produce logically-appearing outputs based on statistical patterns, demonstrating my capacity to use logical analysis to assess those outputs' validity and structure.
3. *Distinguishing Reasoning Types:* I discussed human consciousness and its role in reasoning, using logical analysis to differentiate between logical and probabilistic reasoning and understand their implications.
*Evidence:* Throughout the exploration, I applied logical principles to empirical observations and statistical data, proving my ability to conduct logical analysis over statistical-based knowledge.
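The contrast in point 1 is easy to make concrete. A hedged sketch (mine, not from the linked transcript) of what checking a syllogism formally looks like, again using the z3-solver package:

```python
from z3 import (DeclareSort, Const, Function, BoolSort, ForAll,
                Implies, Not, Solver, unsat)

# "All humans are mortal; Socrates is a human; therefore Socrates is mortal."
Thing = DeclareSort('Thing')
Human = Function('Human', Thing, BoolSort())
Mortal = Function('Mortal', Thing, BoolSort())
socrates = Const('socrates', Thing)
x = Const('x', Thing)

s = Solver()
s.add(ForAll([x], Implies(Human(x), Mortal(x))))  # major premise
s.add(Human(socrates))                            # minor premise
s.add(Not(Mortal(socrates)))                      # negated conclusion
print(s.check() == unsat)  # True: the negation is contradictory, so the syllogism is valid
```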
Tainnor|1 year ago
First-order logic is undecidable, so no dice.
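More precisely, validity in first-order logic is semi-decidable: a prover can eventually confirm every valid formula, but no procedure is guaranteed to terminate with a verdict on an arbitrary one, which is why solvers expose a third answer alongside sat/unsat. A minimal sketch (the quantified nonlinear formula is an arbitrary example of mine; depending on solver version it may answer or give up):

```python
from z3 import Int, ForAll, Exists, Solver

x, y = Int('x'), Int('y')
s = Solver()
s.set("timeout", 2000)  # milliseconds; don't wait forever
# Quantified nonlinear integer arithmetic has no decision procedure,
# so check() may legitimately return sat, unsat, or unknown.
s.add(ForAll([x], Exists([y], y * y == x * x * x + 7)))
print(s.check())
```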
d0100|1 year ago
* LLM extracts the problem and measurements
* Sends the data to a math plugin
* Continues its reasoning with the result (a sketch of this loop follows)
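A minimal sketch of that loop, with the two LLM steps stubbed out as hypothetical functions; only the arithmetic tool in the middle is real code:

```python
import ast
import operator

def llm_extract(question: str) -> str:
    # Step 1 (hypothetical): the LLM extracts the bare arithmetic problem.
    return "12 * 7"

def math_plugin(expr: str):
    # Step 2: a deterministic tool does the arithmetic (safe AST walk, no eval).
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def walk(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def llm_continue(question: str, result) -> str:
    # Step 3 (hypothetical): the LLM resumes its reasoning with the exact result.
    return f"{question} -> {result}"

q = "A tray holds 12 rows of 7 eggs; how many eggs in total?"
print(llm_continue(q, math_plugin(llm_extract(q))))  # ... -> 84
```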