alecbz | 27 days ago

Trying to get LLMs to understand bugs that I myself am stuck on has had an approximately 0% success rate for me.

They're energetic "interns" that can churn out a lot of stuff fast but seem to struggle a lot with critical thinking.


jama211|26 days ago

I’m not going to take bitter advice from someone who either hasn’t used them in a long time, or is terribly bad at using them. Especially as it seems like you hate them so much.

I don’t particularly like them or dislike them, they’re just tools. But saying they never work for bug fixing is just ridiculous. Feels more like you just wanted an excuse to get on your soapbox.

alecbz|26 days ago

It's not that they can't fix bugs at all, but I find that if I've already attempted to debug something and hit a wall, they're rarely able to help further.

chrisjj|25 days ago

> I’m not going to take bitter advice from someone...

He didn't give advice. He reported his personal experience and conclusion.

chrisjj|27 days ago

> seem to struggle a lot with critical thinking.

It is an illusion arising from anthropomorphisation. They aren't thinking at all. They are just parroting the output of thinking that has long since gone.

alecbz|27 days ago

This feels too strong IMO.

Just focusing on the outputs we can observe, LLMs clearly seem to be able to "think" correctly on some small problems that feel like generalizations of examples they've been trained on (as opposed to pure regurgitation).

Objecting to this on some kind of philosophical grounds of "being able to generalize from existing patterns isn't the same as thinking" feels like a distinction without a difference. If LLMs were better at solving complex problems I would absolutely describe what they're doing as "thinking". They just aren't, in practice.