gentooflux | 2 months ago

LLMs generate the most likely code given the problem they're presented with and everything they've been trained on; they don't actually understand how (or even if) it works. I only ever get away with that when I'm writing a parser.

chii | 2 months ago

> they don't actually understand how

But if it empirically works, does it matter whether the "intelligence" truly "understands" it?

Does a chess engine "understand" the moves it makes?

goatlover | 2 months ago

It matters if AGI is the goal. If it remains a tool to make workers more productive, then it doesn't need to truly understand, since the humans using the tool understand. I'm of the opinion that AI should have stood for Augmented (Human) Intelligence outside of science fiction. I believe that's what early pioneers like Douglas Engelbart thought. Clearly that's what Steve Jobs and Alan Kay thought computing was for.

gentooflux | 2 months ago

If it empirically works, then sure. But if instead every solution it provides beyond a few trivial lines falls somewhere between "just a little bit off" and "relies entirely on core library functionality that doesn't actually exist", then I'd say it does matter, and it's only slightly better than an opaque box that spouts random nonsense (which will soon include ads).
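To illustrate that second failure mode with a hypothetical Python sketch (the function name is invented, which is exactly the point: it is not in the standard library):

    import json
    import urllib.request

    # Looks idiomatic, but the json module has no such function;
    # this raises AttributeError the moment it runs.
    config = json.load_from_url("https://example.com/config.json")

    # What actually works: fetch the resource, then parse it.
    with urllib.request.urlopen("https://example.com/config.json") as resp:
        config = json.load(resp)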

jvanderbot | 2 months ago

This is a semantic dead end when discussing results and career choices.