top | item 46557662

iamwil | 1 month ago

I have a hunch we'll eventually swing back when we find the limits of vibe coding--in that LLMs also can only hold so much complexity in their heads, even if it's an order of magnitude (or more) greater than ours. If we make it understandable for humans then it'll definitely be trivial for LLMs, which frees them up to do other things. I mean, they don't have infinite layers or units to capture concepts. So the more symmetrical, consistent, and fractal (composable) you can make your code, the easier time an LLM will have with it to solve problems.
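One reading of "symmetrical, fractal (composable)" code: small, self-similar units that combine the same way at every level, so a model (or a human) only needs a local view to reason about any piece. A minimal sketch of that idea, with invented function names for illustration:

```python
from functools import reduce

def compose(*fns):
    """Left-to-right function composition: compose(f, g)(x) == g(f(x))."""
    return lambda x: reduce(lambda acc, fn: fn(acc), fns, x)

# Small, single-purpose steps are easy to reason about in isolation...
strip_whitespace = str.strip
lowercase = str.lower
tokenize = str.split

# ...and larger units are built the same way as the parts (the "fractal" bit).
normalize = compose(strip_whitespace, lowercase)
to_tokens = compose(normalize, tokenize)

print(to_tokens("  Hello World  "))  # ['hello', 'world']
```

The claim, roughly, is that this shape keeps each definition short enough to fit in anyone's working memory, carbon- or silicon-based.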

falloutx | 1 month ago

An LLM's context window limit already hits you in the nose when you have a big codebase and you ask it questions that make it read a lot of code. 200k tokens is easy to hit sometimes, especially when you only truly get to use about 120k of it.
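A rough way to see how fast a codebase eats a context window, using the common ~4 characters per token heuristic (the heuristic, the extension list, and the 200k figure are illustrative, not exact; real tokenizers vary):

```python
import os

CHARS_PER_TOKEN = 4  # rough rule of thumb for English text and code

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return len(text) // CHARS_PER_TOKEN

def codebase_token_estimate(root: str, exts=(".py", ".js", ".ts", ".go")) -> int:
    """Walk a source tree and sum rough token counts for source files."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        total += estimate_tokens(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total

# By this estimate, ~800 KB of source text fills a 200k-token window,
# before you spend a single token on the conversation itself.
```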

qwery | 1 month ago

LLMs have no heads.

No one has, to my knowledge, demonstrated a machine learning program with any understanding or complexity of behaviour exceeding that of a human.

LLMs don't have understanding.

Frees up who, the LLM or the human? Same question for "they".

What does symmetrical, fractal code look like in this context? How does this property assist the LLM's parser?

iamwil | 1 month ago

Of course they have no literal heads. Please use a more gracious interpretation when reading.