top | item 45718016


seer | 4 months ago

Isn't this the compiled-languages-vs-hand-written-machine-code argument all over again?

The compiler produces a metric shit ton of code that I don't see when I'm writing C++ code. And don't get me started on TypeScript/Clojure - the amount of code that gets written underneath is mindbogglingly staggering, yet I don't see it, for me the code is "clean".

And I'm old enough to remember the tail end of the MachineCode -> CompiledCode transition, and have certainly lived through CompiledCode -> InterpretedCode -> TranspiledCode ones.

There were certainly people who knew the ins and outs of the underlying technology who produced some stunningly fast and beautiful code, but the march of progress was inevitable and they were gradually driven to obscurity.

This recent LLM step just feels like more of the same. *I* know how to write an optimized routine that the LLM will stumble to do cleanly, but back in the day lots of assembler wizards were doing some crazy stuff, stuff that I admired but didn't have the time to replicate.

I imagine in the next 10-20 years we will have devs that _only_ know English, are trained in classical logic, and have flame wars about exactly what code their tools would generate given various sentence invocations. And people will benchmark and investigate the way we currently do with JIT compilation and CPU caching - very few know how it actually works, but the rest don't have to, as long as the machine produces the results we want.

Just one more step on the abstraction ladder.

The "Mars" trilogy by Kim Stanley Robinson had very cool extrapolations of where this all could lead - technologically, politically, socially and morally. LLMs didn't exist when he was writing it, but he predicted them anyway.


jimbokun | 4 months ago

You don't have to review the compiler output because it's deterministic, thoroughly tested, predictable, consistent, and reliable.

You have to review all the LLM output carefully because it could decide to bullshit anything at any given time, so you must always be on high alert.

seer | 4 months ago

Ha! That’s what is actually happening under the hood, but it is definitely not the experience of using it. If you are not into CS, or you haven’t coded in the abstraction below, it can be very tough to figure out what exactly is going on, and the reactions to your high-level code seem random.

A lot of people (me included) have a model of what is going on when they write some particular code, but sometimes the compiler just doesn’t do what you think it would - the JIT will not run, some data will not be mapped in the correct format, and your code will magically not do what you wanted it to.

Things do “stabilise” - before TypeScript there was a slew of transpiled languages, and with some of them you really had nasty bugs where you didn’t know how they were being triggered.

With Ruby, there were so many memory leaks that you just gave up and periodically restarted the whole thing, because there was no chance of figuring them out.

Yes, things were “deterministic”, but sometimes less so, and we built patterns and processes around that uncertainty. We still do for a lot of things.

While things are very, very different, the emotion of “reining in” an agent gone off the rails feels kinda familiar, on a superficial level.

pseudalopex | 4 months ago

A stronger plausible interpretation of their comment is that "understanding" meant evaluating correctness, not performance.

Higher level languages did not hinder evaluating correctness.

Formal languages exist because natural languages are ambiguous inevitably.

spockz | 4 months ago

Exactly - understanding the correctness of the code, but also understanding what a codebase's purpose is and what it should be doing. Add to that how the codebase is laid out. By adding more cruft, the details fade into the background, making it harder to understand the crux of the application.

Measuring performance is relatively easy regardless of whether the code was generated by AI or not.
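To illustrate that last point, here is a minimal sketch using Python's standard `timeit` module (the `sum_squares` function and its workload are made-up examples): the measurement procedure is identical whether the code under test was written by hand or generated by an LLM.

```python
import timeit

def sum_squares(n):
    # The implementation could be hand-written or LLM-generated;
    # the timing harness below does not care either way.
    return sum(i * i for i in range(n))

# Repeat the measurement 5 times, 100 calls each, and take the best run
# (the minimum is the least noisy estimate of the true cost per call).
runs = timeit.repeat(lambda: sum_squares(10_000), number=100, repeat=5)
per_call = min(runs) / 100
print(f"fastest average per call: {per_call:.6f}s")
```

The black-box nature of the measurement is the point: benchmarking treats code as an opaque function of inputs to outputs and wall-clock time, so the origin of the code never enters into it.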