
smj-edison | 1 month ago

One thing I think has been missing from these discussions, though, is that each level of abstraction needs to be introspectable. LLMs get compared to compilers a lot, so I'd like to ask: what is the equivalent of dumping the tokens, AST, SSA, IR, optimization passes, and assembly?

That's where I find the analogy on thin ice, because somebody has to understand the layers and their transformations.
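
For concreteness, CPython exposes most of those layers through its standard library. Here's a minimal sketch of what "dumping the layers" looks like there (the add function is just an illustration; ast.dump's indent argument needs Python 3.9+):

    import ast
    import dis
    import io
    import tokenize

    src = "def add(a, b):\n    return a + b\n"

    # Tokens: the lexer's view of the source.
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))

    # AST: the parser's view.
    print(ast.dump(ast.parse(src), indent=2))

    # Bytecode: the closest thing CPython has to "assembly".
    dis.dis(compile(src, "<example>", "exec"))

(CPython doesn't have SSA or real optimization passes to dump; for those you'd reach for something like LLVM, where I believe opt's -print-after-all gives you the pass-by-pass view.)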

fn-mote | 1 month ago

“Needs to be” is a strong claim. The skill of debugging complex problems by stepping through disassembly to find a compiler bug is very specialized. Few people can do it. Most applications don’t need that kind of “introspection”. They need “encapsulation” and faith that the lower layers work well 99.9+% of the time, and they need to know who to call when that fails.

I’m not saying generative AI meets this standard, but it’s different from what you’re saying.

smj-edison | 1 month ago

Sorry, I should clarify: it needs to be introspectable by somebody. Not every programmer needs to be able to introspect the lower layers, but that capability needs to exist.

Now, I guess you can read the code an LLM generates, so maybe that layer does exist. But that's why I don't like the idea of a programming language for LLMs, by LLMs, that's inscrutable to humans: most of the intermediate layers in a compiler are designed for humans, with only the final assembly being made for the CPU.