top | item 35518746

Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4

44 points | saurabh20n | 2 years ago | arxiv.org

60 comments


RayVR|2 years ago

This aligns well with my personal experience using gpt-4.

The model provides surprisingly good responses on topics which I know are readily available online while being potentially troublesome to find the exact information I want. I have even found it useful when I know there is a tool for what I want but can’t recall the jargon used to find it via Google. Simply describing the rough idea is enough to get the model to spit out the jargon I need.

However, the moment I ask a real question that goes beyond summarizing something which is covered thousands of times online, I am immediately let down.

Is this just a result of the foundation of the model being the world's best autocompletion engine? My assessment is "yes", and I don't believe that any of the modifications coming, like plugins, will fundamentally change this.

raydiatian|2 years ago

I have been thinking for a few weeks now that we need another term for large language models trained on colossal datasets: AGK, artificially generally/globally knowledgeable. It can mimic a likeness of problem solving because the corpus it was trained on is full of problem/solution pairs in the abstract. But task it with any novel problem solving challenge outside of its training that is of sufficient complexity and it will balk, thereby precluding it from being AGI, because humans are by nature problem solvers.

Furthermore, I just don’t feel like the transformer architecture is suited for problem solving. Like I may just be a charlatan but self attention over the space of words does not seem like it’s going to be enough, and praying it falls out in emergent behavior if we can just add more parameters is… unscientific-ish? Now, if you could figure out a way to do self-attention over the space of concepts? Maybe you’ve got something.

I feel like AlphaGo ideas and some variation on MCTS is more likely to produce a solid problem solving architecture.

rvz|2 years ago

> However, the moment I ask a real question that goes beyond summarizing something which is covered thousands of times online, I am immediately let down.

I'm very sure I said this from the start, against the ridiculous hype. Summarization of existing text is the *only* safe use case for LLMs. Anything else is asking for disappointment.

We have already seen it used as a search engine, where it confidently hallucinates incorrect information. We have seen it pretend to be a medical professional or a replacement lawyer, and it has outright regurgitated nonsensical and dangerous advice - making it completely unreliable for those use cases. Especially since (deep) neural networks in general are still the same black boxes, unable to explain and reason about their own decisions, they remain unsuitable for high-risk applications.

As for writing code: despite what the hype-squad tells you about GPT-4 and ChatGPT, the ground reality is that they generate broken code from the start and cannot reason about why they did so in the first place. Non-programmers wouldn't question the output, whereas an experienced professional would catch the errors immediately.

Due to this untrustworthiness, programmers now have to check and review everything that GPT-4 and ChatGPT generate for their projects, every time.

The AI LLM hype has only further exposed its limitations.

pottspotts|2 years ago

For a significant number of software developers, GPT and GitHub's Copilot have replaced StackOverflow, and even Googling more generally. It is more than an autocomplete; it is the best resource for software development by far, IMO. It's a tutor that's an expert in virtually every topic.

dbrueck|2 years ago

Similarly, when I think of ChatGPT as a really cool and advanced search engine frontend, its behavior - including its limitations and failures - makes the most sense to me.

seba_dos1|2 years ago

> I am immediately let down

Why? I'm not sure how you could have expected anything else in the first place.

Closi|2 years ago

I think one main failure in the framing of these papers (and in discussion of LLMs more broadly) is that the abstract says GPT-4 'struggles' with logical reasoning:

> ChatGPT and GPT-4 do relatively well on well-known datasets […] however, the performance drops significantly when handling newly released and out-of-distribution [datasets, where] logical reasoning remains challenging for ChatGPT and GPT-4

But reading the paper the challenges it is failing on are ones that I wager the average human would fail on too (at least a good portion of the time).

The paper might strictly be accurate, but I think we should try and bring these papers back to a real-world context - which is that it’s probably operating above your average human at these tasks.

Is superhuman/genius-level capability really required before we say the LLMs are any good?

(I see this view on HN too - statements like ‘LLMs can’t create novel maths theorems!’ as an argument that LLMs aren’t good at reasoning, disregarding that most humans today can’t find novel/undiscovered maths theorems)

svachalek|2 years ago

If you really force it to reason, rather than regurgitate arguments from its training set, you will find it is nowhere near the genius line. Make up some rules and have it try to answer questions according to the rules. In my experiments I feel it's something like a 4 or 5 year old child both in its logical limitations and penchant for distraction.

However, it's important to note one VERY important thing -- this is not a system that was designed to reason! At all, as far as I know. That just fell out of its facility with language somehow. So to accidentally be able to reason like a 4-year-old human (which is vastly more clever than the adult of any other animal species I'm aware of) is incredibly impressive.

I think the next obvious step is to couple this tech with some classic computing, which has far exceeded human capabilities for logic and reason for decades already. If ChatGPT had a secondary system for reasoning and used the LLM just for setting up problems and reading results, I think it could reach superhuman levels of reasoning quite easily.

pottspotts|2 years ago

The goal posts for AI are moving quickly, and in my mind, a lot of the criticism is too shallow.

People want it to perform better than any expert human at any possible subject before it's considered "real AI". It isn't enough for critics that it's better than the average person at virtually everything it's put to the test on.

It seems like there is some resentment and almost anger at this technology, particularly with the artistic AIs like Midjourney. I can understand that more readily, but what's the real beef with ChatGPT?

skybrian|2 years ago

You're framing this as if there were a single yes-or-no question that we should all agree on. (Are the LLMs "any good?")

But in real-world contexts, there are some tasks that just about anyone could do, others where "average" human performance isn't good enough and you need to hire an expert, and also some jobs that can only be done by machine.

So it seems like the bar should be set based on what you think is necessary for whatever practical application you have in mind?

If it's just a game, beating an average chess player, someone who is really good, or the best in the world are different milestones. And for chess there is an Elo rating system that lets you answer this more precisely, too.
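For reference, the Elo expectation mentioned here is a simple closed-form formula. A minimal sketch, using the standard 400-point logistic convention from chess ratings:

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected score (win probability plus half the draw probability)
    for a player rated r_a against an opponent rated r_b.

    Standard Elo convention: a 400-point rating gap corresponds to
    10:1 odds in favor of the stronger player.
    """
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))


# Equal ratings give an expected score of exactly 0.5;
# a 400-point advantage gives 10/11, roughly 0.909.
print(elo_expected(1500, 1500))  # -> 0.5
print(elo_expected(1900, 1500))  # -> 0.909...
```

This is why Elo lets you place a milestone precisely: pick a target opponent rating and the formula tells you the expected score against them.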

A paper about how well chatbots do on some reasoning tests can't answer this for you.

TOMDM|2 years ago

I think a lot of these LLM benchmarks should include a human average as a baseline; otherwise I don't really have a frame of reference other than personal experience with the models.

jillesvangurp|2 years ago

People aren't that good at logic either. So, gpt-4 not being great at this is maybe not that surprising.

Probably the best feature of gpt-4 is the ability to use tools. For example, it may not be that good at calculating things. But it can use a calculator. And if you think about it, a lot of people (including mathematicians) aren't actually that good at calculating either. We all learn it in school and then we forget much of it. That's why we have calculators. It's not a big deal.

GPT-4 is more than capable of knowing the best tool for the job, and figuring out how to use it isn't that hard. You can actually ask it "what's the best tool for X", get a usable response, and then ask a follow-up question to produce a script in the language of your choosing that demonstrates how to use it, complete with unit tests. Not a hypothetical - I've been doing that for the past few weeks and getting some usable results.
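The tool-use pattern described above boils down to a simple control loop: the model picks the tool, deterministic code does the exact work, and the result is fed back for a final answer. A minimal sketch - the `llm` callable and its dict-shaped reply format are hypothetical stand-ins, not any real vendor API:

```python
def calculator(expression: str) -> str:
    """The deterministic tool: evaluates arithmetic exactly.
    eval() is stripped of builtins for this sketch; a real system
    would use a proper expression parser instead."""
    return str(eval(expression, {"__builtins__": {}}, {}))

# Registry of tools the model is allowed to request by name.
TOOLS = {"calculator": calculator}

def answer(question: str, llm) -> str:
    """Ask the (hypothetical) LLM. If it replies with a tool request,
    e.g. {"tool": "calculator", "input": "12*7"}, run the tool and
    feed the result back for a final natural-language answer."""
    reply = llm(question)
    if isinstance(reply, dict) and reply.get("tool") in TOOLS:
        result = TOOLS[reply["tool"]](reply["input"])
        return llm(f"{question}\nTool result: {result}")
    return reply
```

The point of the design is that the LLM never does the arithmetic itself - it only decides *which* tool to invoke and interprets the result, which is exactly the "it can use a calculator" division of labor described above.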

And that's got me wondering what will happen once we start networking all these specialized AIs and tools together. It might not be able to do everything by itself, but it can get quite far figuring out requirements and turning those into running code. It's not that big of a leap from answering questions about how to do things to actually building programs that do things.

causality0|2 years ago

They're good at "memory" reasoning but terrible at deductive reasoning. If you say there's a sign in front of a door saying "push", it will tell you you need to push the door; but if you say a powerful wind blew through and you see a sign saying "pull" lying on the ground on the other side of a glass door, it has no idea whether you should push or pull.

cjbprime|2 years ago

I guess I'm with the LLM on this one, since I can't follow your example. Did the sign flip over while it was falling? Did the sign fall towards or away from the glass door that I am on the other side of? Where are the doorhandles?

Can you write this example in a way that's more comprehensible to humans, and then we can ask GPT-4 about it?

progrus|2 years ago

Will we ever get apologies from the AI-Foom crew for losing their marbles and riling people up about the word calculator?

micromacrofoot|2 years ago

word calculator is a more impressive title than I’d grant some people

dmz73|2 years ago

LLMs are just programs that can produce human-like language output from human-like language input; calling them AI of any kind greatly overstates their capabilities. There is no "reasoning" or "understanding" here, just a giant ball of mud full of auto-generated if-then-else-like code with calls to a random number function peppered around.

The two main problems I see with attributing AI to these programs are:

1. People will assume they are receiving an intelligent response they can rely on without sanity-checking. This is different from receiving the same response from other people, because one learns who to trust and when. You can never trust these programs.

2. If/when real AI emerges, it will be treated poorly because most people will assume it is the same "brainless" AI they were sold so many times before. In that respect the treatment of real AI will be equivalent to child abuse or slavery and will result in another giant black mark in human history.