top | item 40381541


12907835202 | 1 year ago

Does anyone know how far off we are from having logical AI?

Math seems like low hanging fruit in that regard.

But logic as it's used in philosophy feels like it might be a whole different and more difficult beast to tackle.

I wonder if LLMs will just get better to the point of being indistinguishable from logic rather than actually achieving logical reasoning.

Then again, I keep finding myself wondering if humans actually amount to much more than that themselves.


ben_w|1 year ago

> Does anyone know how far off we are from having logical AI?

1847, wasn't it? (George Boole). Or 1950-60 (LISP) or 1989 (Coq) depending on your taste?

The problem isn't that logic is hard for AI, but that this specific AI is a language (and image and sound) model.

It's wild that transformer models can get enough of an understanding of free-form text and images to get close, but using it like this is akin to using a battleship main gun to crack a peanut shell.

(Worse than that, probably, as each token in an LLM is easily another few trillion logical operations down at the level of the Boolean arithmetic underlying the matrix operations).

If the language model needs to be part of the question-solving process at all, it should only be to transform the natural language question into a formal specification, then pass that formal specification directly to another tool which can use it to generate and return the answer.
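A minimal sketch of that split, where the specification format and function names are invented for illustration: the model's only job is to emit a formal specification, and a deterministic solver produces the answer.

```python
# Sketch of the LLM -> formal specification -> solver split described
# above. Only the solver step is real code; the "LLM" step is mocked.
from fractions import Fraction

def solve_linear(spec: dict) -> Fraction:
    """Solve a*x + b = 0 from a formal specification {'a': ..., 'b': ...}."""
    return Fraction(-spec["b"], spec["a"])

# Pretend the language model has already translated "three times a
# number plus six is zero -- what is the number?" into this form:
spec = {"a": 3, "b": 6}
print(solve_linear(spec))  # -2
```

The point of the split is that the second stage is exact: the solver either returns the right answer or fails loudly, rather than producing a plausible-looking wrong token.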

entropicdrifter|1 year ago

Right? We finally invent AI that effectively has intuitions, and people are faulting it for not being good at stuff that's trivial for a computer.

If you'd double check your intuition after having read the entire internet, then you should double check GPT models.

Melatonic|1 year ago

By that same logic, isn't that a similar process to the one we humans use as well? Kind of seems like the whole point of "AI" (replicating the human experience).

xanderlewis|1 year ago

> Math seems like low hanging fruit in that regard.

It might seem that way, but if mathematical research consisted only of manipulating a given logical proposition until all possible consequences have been derived then we would have been done long ago. And we wouldn't need AI (in the modern sense) to do it.

Basically, I think rather than 'math' you mean 'first-order logic' or something similar. The former is a very large superset of the latter.

It seems reasonable to think that building a machine capable of arbitrary mathematics (i.e. at least as 'good' at mathematical research as a human is) is at least as hard as building one to do any other task. That is, it might as well be the definition of AGI.

glial|1 year ago

I think LLMs will need to do what humans do: invent symbolic representations of systems and then "reason" by manipulating those systems according to rules.

Here's a paper working along those lines: https://arxiv.org/abs/2402.03620

dunefox|1 year ago

Is this what humans do?

ryanianian|1 year ago

(Not an AI researcher, just someone who likes complexity analysis.) Discrete reasoning is NP-Complete. You can get very close with the stats-based approaches of LLMs and whatnot, but your minima/maxima may always turn out to be local rather than global.

slushy-chivalry|1 year ago

Maybe theorem proving could help? Ask GPT-4o to produce a proof in Coq and see if it checks out... or split it into multiple agents -- one produces the proof of the closed formula for the tape roll thickness, and another one verifies it.
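Short of a full Coq proof, that two-agent idea can at least be simulated numerically. A sketch with invented dimensions: one function derives the tape length from the closed formula L = π(R² − r²)/t (inner radius r, outer radius R, tape thickness t), and a second independently "verifies" it by summing the circumference of every layer.

```python
import math

def length_closed_form(r: float, R: float, t: float) -> float:
    """Tape length from the closed formula L = pi * (R**2 - r**2) / t."""
    return math.pi * (R * R - r * r) / t

def length_by_layers(r: float, R: float, t: float) -> float:
    """Independent check: sum the mid-layer circumference of each wrap."""
    n_layers = round((R - r) / t)
    return sum(2 * math.pi * (r + (i + 0.5) * t) for i in range(n_layers))

# Invented example roll: 2 cm core, 5 cm outer radius, 0.1 mm tape.
closed = length_closed_form(0.02, 0.05, 0.0001)
layered = length_by_layers(0.02, 0.05, 0.0001)
assert abs(closed - layered) / closed < 0.01   # the two agents agree
```

This only builds confidence rather than certainty, which is exactly the gap a proof checker like Coq would close.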

MR4D|1 year ago

> Does anyone know how far off we are from having logical AI?

Your comment made me think of something. How do we know that logical AI is relevant? I mean, how do we know that humans are driven by logic rather than by statistics?

ryanianian|1 year ago

Humans are really good pattern matchers. We can formalize a problem into a mathematical space, and we have developed lots of tools to help us explore the math space. But we are not good at methodically and reliably exploring a problem-space that requires NP-complete solutions.

CooCooCaCha|1 year ago

A smart human can write and iterate on long, complex chains of logic. We can reason about code bases that are thousands of lines long.

cornholio|1 year ago

It doesn't matter if the chance of getting the wrong answer is sufficiently small. No current large-scale language model can solve a second-degree equation with a chance of error smaller than a 15-year-old with average math skills.
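For contrast, the deterministic route is a few lines and, floating-point caveats aside, never guesses. A minimal sketch:

```python
import math

def solve_quadratic(a: float, b: float, c: float) -> list:
    """Real roots of a*x**2 + b*x + c = 0, smallest first."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                       # no real roots
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

print(solve_quadratic(1, -5, 6))  # [2.0, 3.0]
```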

Royce-CMR|1 year ago

So on a whim, I took this to ChatGPT and asked it to explore a philosophical position and then assess whether its own assessment was logical or statistical.

Longer prompt responses here: https://text.is/1V0OJ

Closing position below.

*Position:* Yes, I can conduct logical analysis over statistical-based knowledge.

*Justification:*

1. *Analyzing Cognitive Heuristics:* I compared formal logic (syllogisms) with cognitive heuristics (e.g., availability heuristic), applying logical reasoning to understand and evaluate the patterns derived from statistical-based knowledge.

2. *Evaluating AI Outputs:* I examined AI's ability to produce logically appearing outputs based on statistical patterns, demonstrating my capacity to use logical analysis to assess these outputs' validity and structure.

3. *Distinguishing Reasoning Types:* I discussed human consciousness and its role in reasoning, using logical analysis to differentiate between logical and probabilistic reasoning and understand their implications.

*Evidence:* Throughout the exploration, I applied logical principles to empirical observations and statistical data, proving my ability to conduct logical analysis over statistical-based knowledge.

throwaway4aday|1 year ago

Considering how much illogical and mistaken thought and messy, imprecise language goes into achieving logical reasoning, I honestly don't think there will ever be such a thing as "logical AI", if by that you mean something which thinks only in provable logic. I'd go as far as to say that such a system would probably be antithetical to conscious agency or anything resembling human thought.

Tainnor|1 year ago

> Math seems like low hanging fruit in that regard.

First-order logic is undecidable, so no dice.

d0100|1 year ago

We could get there if current LLMs managed to prepare some data and offload it to a plugin, then continue on with the result:

* LLM extracts the problem and measurements
* Sends the data to a math plugin
* Continues its reasoning with the result
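Those steps can be mocked end to end. The tool-call format below is invented for the sketch (real systems use each vendor's function-calling API), and the model step is replaced by a hardcoded string:

```python
import json

def math_plugin(expression: str) -> float:
    # A real plugin would use a proper safe evaluator; eval() with
    # stripped builtins is only good enough for this sketch.
    return eval(expression, {"__builtins__": {}})

# What the LLM might emit after extracting the problem and measurements:
tool_call = json.dumps(
    {"tool": "math", "expression": "(0.05**2 - 0.02**2) * 3.14159 / 0.0001"}
)

# The dispatcher runs the call and hands the exact result back to the
# model's context, where its reasoning continues from a correct number.
call = json.loads(tool_call)
result = math_plugin(call["expression"])
print(round(result, 2))  # 65.97
```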

jiggawatts|1 year ago

That’s already a thing. ChatGPT can utilise Wolfram Mathematica as a “tool”. Conversely, there’s an LLM included in the latest Mathematica release.

fragmede|1 year ago

ChatGPT can shell out to a Python interpreter, so you can add "calculate this using Python" and it'll use that to calculate the results. (No guarantees it gets the Python code right, though.)