jbritton | 3 months ago

I have spent many hours with them on coding tasks. As things currently stand, once context or complexity reaches a certain point they become completely incapable of solving problems, and that point occurs on very simple things. They appear completely brain dead at times, although they are magnificent liars at making you think they understand the problem. That said, I recently got ChatGPT 5 to solve a problem in a couple of hours that Claude Sonnet 4 was simply never going to solve. So they are improving. I don’t know the limits.

I’m more hopeful that a feedback loop with specialized agents will take things much further. I’m extremely skeptical that bigger context windows and larger models are going to get us reasoning. The skepticism comes from observation. Clearly no one knows how thinking actually works. I don’t know how to address the evolve part. LLMs don’t directly mutate and face selective pressure like living organisms. Maybe a simulation could be made to do that.

jbritton | 3 months ago

Gotham Chess has done chatbot chess championships. The chatbots make a few good moves, then begin making illegal moves, randomly removing or adding pieces, and completely ignoring obvious threats and attacks. It is obvious that the pattern matching is not resulting in reasoning. Another example is Towers of Hanoi. An LLM can write code to solve it, because that’s an easy pattern match. But it can’t write out the steps beyond a 3-disk puzzle. It has no understanding of the recursive nature of the problem.
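The contrast is worth making concrete: the recursive program an LLM can easily produce is only a few lines, even though reciting its output for larger puzzles is what trips the models up. A minimal sketch:

```python
def hanoi(n, src, dst, aux):
    """Return the list of moves solving an n-disk Towers of Hanoi puzzle."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, aux, dst)       # clear n-1 disks onto the spare peg
            + [f"{src}->{dst}"]               # move the largest disk to the target
            + hanoi(n - 1, aux, dst, src))    # restack the n-1 disks on top of it

print(hanoi(3, "A", "C", "B"))  # 2**3 - 1 = 7 moves
```

An n-disk puzzle takes 2^n − 1 moves, so the step list grows exponentially while the program stays fixed; writing the steps out by hand requires actually tracking the recursion, not just reproducing the pattern.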

fl7305 | 3 months ago

>> the presumption that reasoning skills cannot evolve

> I don’t know how to address the evolve part. LLMs don’t directly mutate and have selective pressure like living organisms.

Sorry, that was poorly worded. I meant "can reasoning skills not be evolved through the neural net training phase?"

Sure, once you deploy an LLM, it does not evolve any more.

But let's say you have a person, Tom, with 5-minute short-term memory loss, meaning he can never remember more than 5 minutes back. His reasoning skills are completely static, based only on his education before the accident and the last 5 minutes.

Is "5-minute Tom" incapable of reasoning because he can't learn new things?

> They appear completely brain dead at times

Yes, definitely. But they also manage to produce what looks like actual reasoning in other cases. Meaning, "reasoning, not pattern matching".

So if a thing can reason at some times and in some cases, but not in others, what do we call that?

An LLM is a lot like a regular CPU. A CPU basically operates step by step: it takes inputs, a state memory, and stored read-only data, feeds those into combinatorial logic to compute new outputs, and updates the state memory.

An LLM does the same thing. It runs step by step: it takes the user input plus its own previous output tokens, along with stored read-only data (the weights), and puts those through a huge neural network to generate the next output token.

The "state memory" in an LLM (=the context window) is a lot more limited than a CPU RAM+disk, but it's still a state memory.
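The loop described above can be sketched as a toy program. Here `next_token` is a hypothetical stand-in for the frozen, trained network (it just echoes a counter so the sketch is runnable); the only mutable state is the growing context, mirroring the CPU analogy:

```python
def next_token(context):
    # Hypothetical stand-in for the trained network: a pure function of
    # the context and the (read-only) weights. Here it just emits a
    # counter so the loop is self-contained and runnable.
    return f"tok{len(context)}"

def generate(prompt, steps):
    context = list(prompt)        # the "state memory" = the context window
    for _ in range(steps):        # one fixed step per token, like a CPU cycle
        context.append(next_token(context))
    return context

print(generate(["hello"], 3))
```

The point of the sketch is only structural: each step is the same fixed transformation of state plus read-only data into new state, whatever one wants to call the computation happening inside that transformation.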

So I don't have a problem imagining that an LLM can perform some level of reasoning. Limited and flawed for sure, but still a different creature than "pattern matching".

whattheheckheck | 3 months ago

Can you post your experiments please?

jbritton | 3 months ago

I don’t have a log of work that I can post.

handoflixue | 3 months ago

Would you agree that many humans, especially younger ones, are also incapable of reasoning?

addaon | 3 months ago

I think it’s completely uncontroversial that younger humans are incapable of reasoning, no? The only area for discussion is at which age (if any) this changes for an individual.