top | item 44432628


icoder | 8 months ago

This nicely describes where we're at with LLMs, as I see it: they are 'fancy' enough to write code, yet at the same time they can't be trusted to do things that can be solved with a simple hook.

I feel that current improvement mostly comes from slapping what feel to me like workarounds on top of something that may very well be a local maximum.


oefrha|8 months ago

> they are 'fancy' enough to be able to write code yet at the same time they can't be trusted to do stuff which can be solved with a simple hook.

Humans are fancy enough to be able to write code yet at the same time they can’t be trusted to do stuff which can be solved with a simple hook, like a simple formatter or linter. That’s why we still run those on CI. This is a meaningless statement.
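The formatter/linter point holds regardless of who wrote the code: a hook catches mechanical issues deterministically, with no judgment involved. A minimal sketch of such a gate (hypothetical `lint` function, plain Python, no real CI system or linter assumed):

```python
def lint(source: str) -> list[str]:
    """Flag mechanical issues a simple CI hook catches deterministically."""
    problems = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if line != line.rstrip():
            problems.append(f"line {lineno}: trailing whitespace")
        if "\t" in line:
            problems.append(f"line {lineno}: tab character")
    return problems

# Gate: fail on any finding, human- and machine-written code alike.
clean = "def f():\n    return 1\n"
dirty = "def f():   \n\treturn 1\n"
```

Here `lint(clean)` returns no findings while `lint(dirty)` flags both lines; a pre-commit hook or CI step would simply exit nonzero when the list is non-empty.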

RobertDeNiro|8 months ago

One is a machine; the other is not. People have to stop comparing LLMs to humans. Would you hold a car to human standards?

ramoz|8 months ago

Claude Code is an agent, not an LLM. Literally this is software that was released 4mo ago. lol.

A year ago, no provider was training LLMs in an environment modeled for agentic behavior, i.e. in conjunction with the software design of an integrated utility.

'Slapped-on workaround' is a very lazy way to describe this innovation.

koakuma-chan|8 months ago

> Literally this is software that was released 4mo ago.

Feels like ages

Marazan|8 months ago

Someone described LLMs in the coding space as stone soup. So much stuff is being created around them to make them work better that at some point it feels like you'll be able to remove the LLM from the equation.

samrus|8 months ago

We can't deny the LLM has utility. You can't eat the stone, but the LLM can implement design patterns, for example.

I think this insistence on near-autonomous agents is setting the bar too high, which wouldn't be an issue if these companies weren't then insisting that the bar is set just right.

These things understand language very well; they've essentially solved NLP, because that's what they model extremely well. But agentic behavior is modeled by reinforcement learning, and until that's in the foundation model itself (at the token-prediction level), these things have no real understanding of state spaces being a recursive function of action spaces and such. And they can't autonomously code, or drive, or manage a fund until they do.
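The "state space as a recursive function of the action space" idea can be made concrete with a toy sketch (illustrative names only, no claim about any real RL setup): the next state is the transition function applied to the current state and an action, so a trajectory is a fold over the action sequence.

```python
from functools import reduce

# Toy transition function: state is a position on a line,
# actions are signed steps. s_{t+1} = f(s_t, a_t).
def step(state: int, action: int) -> int:
    return state + action

def rollout(initial: int, actions: list[int]) -> int:
    # The reachable state space is generated recursively by the
    # action space: fold the transition function over the actions.
    return reduce(step, actions, initial)
```

So `rollout(0, [1, 1, -1])` walks 0 → 1 → 2 → 1; the commenter's point is that next-token prediction alone never forces a model to learn this recursive structure the way RL training does.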

iagooar|8 months ago

Humans use tools, and so does AI. Does it make us any less valuable as humans that we use bicycles and hammers? Why would it be bad for an AI to use tools?