top | item 46932818


matheus-rr | 23 days ago

The intermediate product argument is the strongest point in this thread. When we went from assembly to C, the debugging experience changed fundamentally. When we went from C to Java, how we thought about memory changed. With LLMs, I'm still debugging the same TypeScript and Python I was before.

The generation step changed. The maintenance step didn't. And most codebases spend 90% of their life in maintenance mode.

The real test of whether prompts become a "language" is whether they become versioned, reviewed artifacts that teams commit to repos. Right now they're closer to Slack messages than source files. Until prompt-to-binary is reliable enough that nobody reads the intermediate code, the analogy doesn't hold.


andai | 23 days ago

>With LLMs, I'm still debugging the same TypeScript and Python I was before.

Aren't you telling Claude/Codex to debug it for you?

pjmlp | 23 days ago

We went from Assembly to Fortran, with several languages in between, until C came to be almost 15 years later.

surajrmal | 23 days ago

Note that a lot of people also still work in C.

BudapestMemora | 23 days ago

"Until prompt-to-binary is reliable enough that nobody reads the intermediate code, the analogy doesn't hold."

1. OK, so run 100 instances of the prompt under the hood: 1-2 will hallucinate, 3-5 will produce something different from the rest, and the compile can go with whatever 90% of the answers agree on.

2. Computer memory is also not 100% reliable either, but we live with it somehow, without a manual man-in-the-middle checking layer.
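Point 1 amounts to majority voting over independent generations (sometimes called self-consistency). A minimal sketch, assuming the candidate outputs already exist as strings (`majority_output` and the sample data are hypothetical; real runs would come from N model calls):

```python
from collections import Counter

def majority_output(candidates, threshold=0.9):
    """Return the output produced by at least `threshold` of the
    candidates, or None if no candidate clears the bar."""
    winner, votes = Counter(candidates).most_common(1)[0]
    return winner if votes / len(candidates) >= threshold else None

# 100 runs: 90 agree, 2 hallucinate, 8 diverge in other ways
runs = ["binary_A"] * 90 + ["garbage"] * 2 + ["binary_B"] * 8
print(majority_output(runs))  # -> binary_A
```

The hard part this sketch glosses over is the equality check: two LLM outputs are rarely byte-identical, so real voting would need some canonicalization or behavioral-equivalence test.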

whoisthemachine | 23 days ago

Computer memory, even cheap consumer grade stuff, has much higher reliability than 90%. Otherwise your computer would be completely unusable!
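A quick back-of-envelope makes this concrete: if each independently-failing unit were only 90% reliable, the chance that n of them all work is 0.9^n, which collapses fast (illustrative arithmetic only, not measured DRAM error rates):

```python
# Probability that n independently-failing 90%-reliable units all work.
for n in (10, 100, 1000):
    print(n, 0.9 ** n)
# 0.9**100 is already about 3e-5; 0.9**1000 is effectively zero.
```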

Lwerewolf | 23 days ago

I wonder what ECC is for. So, unless you're Google and you're having to deal with "mercurial cores"...

Also, sorry, but what did I just actually attempt to read?