bgirard | 20 days ago
- Ohh look it can [write a small function / do a small rocket hop] but it can't [write a compiler / get to orbit]!
- Ohh look it can [write a toy compiler / get to orbit] but it can't [compile Linux / be reusable]!
- Ohh look it can [compile Linux / get a reusable orbital rocket] but it can't [build a compiler that rivals GCC / turn the rockets around fast enough]!
- <Denial despite the insane rate of progress>
There's no reason to keep building this compiler just to prove this point. But I bet it would catch up real fast to GCC with a fraction of the resources if it was guided by a few compiler engineers in the loop.
We're going to see a lot of disruption come from AI assisted development.
jeffreygoesto|20 days ago
bgirard|20 days ago
But we've also just witnessed LLMs go from being a glorified line auto-complete tool to writing a C compiler in ~3 years. And I think that's something. And note how we keep moving the goalposts.
yourapostasy|20 days ago
This, I strongly suspect, is the crux of the boundaries of their current usefulness. Without accompanying legibility/visibility into the lineage of those decisions, LLMs will be unable to copy the reasoning behind the "why", missing out on a pile of context that I'm guessing is necessary (just as with people) to come up to speed on the decision flow going forward, as the mathematical space for gradient descent to traverse gets both bigger and more complex.
We're already seeing glimmers of this as the frontier labs are reporting that explaining the "why" behind prompts is getting better results in a non-trivial number of cases.
I wonder whether we're barely scratching the surface of just how powerful natural language is.
itsyonas|20 days ago
> - <Denial despite the insane rate of progress>
Sure, but not at the pace that was actually promised. There may also be fundamental limitations to what the current architecture of LLMs can achieve. The vast majority of LLMs are still based on Transformers, which were introduced almost a decade ago. If you look at the history of AI, it wouldn't be the first time that a roadblock stalled progress for decades.
> But I bet it would catch up real fast to GCC with a fraction of the resources if it was guided by a few compiler engineers in the loop.
Okay, so at that point, we would have proved that AI can replicate an existing software project using hundreds of thousands of dollars of computing power and probably millions of dollars in human labour costs from highly skilled domain experts.
jopsen|20 days ago
Most of the time when you're writing a compiler for a new language, you'll be doing things that have been done before.
Because most of the concepts in your language are brought along from somewhere else.
That said: I'd always want a compiler and language designs to be well considered. Ideally, the authors have some proofs of soundness in their heads.
Perhaps LLMs will make formal verification more feasible (from a cost perspective), and then our minds might change about what reliable software is.
raincole|20 days ago
Yeah, but the speed of progress can never catch up with a moving goalpost!
wrxd|20 days ago
I still won’t use it until it also matches all the non-functional requirements, but you’re free to go and recompile all the software you use with it.
friendzis|20 days ago
How do you like those coast-to-coast self drives since the end of 2017?
samultio|20 days ago
andriamanitra|20 days ago
What is interesting is what we can do with LLMs today and what we would like them to be able to do tomorrow, so we can keep developing them in a good direction. Whether or not you (or I) believe it can do that thing tomorrow is thoroughly uninteresting.
gjulianm|20 days ago
Ygg2|20 days ago
Human crews on Mars are just as far-fetched as they ever were. Maybe even farther, due to Starlink trying to achieve Kessler syndrome by 2050.
forty|20 days ago
Interesting that people call this "progress" :)
littlestymaar|20 days ago
The problem is that it is absolutely indiscernible from the Theranos conversation as well…
If Anthropic stopped lying about the current capabilities of their models (like “it compiles the Linux kernel” here, and it's far from the first time they've done that), maybe neutral people would give them the benefit of the doubt.
For every grifter who happens to succeed at delivering on his grandiose promises (Elon), how many will fail?
unknown|20 days ago
[deleted]
unknown|18 days ago
[deleted]
gordonhart|20 days ago
benreesman|20 days ago
But "reliable, durable, scalable outcomes in adversarial real-world scenarios" is not convincingly demonstrated in public, the asterisks are load bearing as GPT 5.2 Pro would say.
That game is still on, and AI assist beyond FIM is still premature for safety critical or generally outcome critical applications: i.e. you can do it if it doesn't have to work.
I've got a horse in this race, which is formal methods as the methodology and AI assist as the thing that makes it economically viable. My stuff is north of demonstrated in the small and south of proven in the large; it's still a bet.
But I like the stock. The no free lunch thing here is that AI can turn specifications into code if the specification is already so precise that it is code.
The irreducible heavy lift is that someone has to prompt it, and if the input is vibes, the output will be vibes. If the input is rigorous... you've just moved the cost around.
The modern software industry is an expensive exercise in "how do we capture all the value and redirect it from expert computer scientists to some arbitrary financier".
You can't. Not at less than the cost of the experts if the outcomes are non-negotiable.
a1o|20 days ago
delaminator|20 days ago
In 1935 the Auburn 851 S/C Speedster hit 100mph
In 1955 the Mercedes-Benz 300 SL Gullwing did 161mph
In 2025 the Yangwang U9 Xtreme hit 308mph
progress is a decaying exponential - Tsiolkovsky's tyranny
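A quick sanity check on those three data points (my arithmetic, not the commenter's): the annualized growth rate of record top speed really does decay across the two eras.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two measurements."""
    return (end / start) ** (1 / years) - 1

# Top speeds quoted above (mph)
early = cagr(100, 161, 1955 - 1935)  # Auburn 851 S/C -> 300 SL Gullwing
late = cagr(161, 308, 2025 - 1955)   # 300 SL Gullwing -> Yangwang U9 Xtreme

print(f"1935-1955: {early:.2%}/yr")  # → 2.41%/yr
print(f"1955-2025: {late:.2%}/yr")   # → 0.93%/yr
```

Roughly 2.4%/yr over the first twenty years versus under 1%/yr over the next seventy: consistent with the decaying-exponential claim, at least for this cherry-picked metric.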
CleaveIt2Beaver|20 days ago
a1o|20 days ago