itsyonas | 21 days ago
> - <Denial despite the insane rate of progress>
Sure, but progress hasn't matched what was actually promised. There may also be fundamental limitations to what the current architecture of LLMs can achieve. The vast majority of LLMs are still based on Transformers, which were introduced almost a decade ago. If you look at the history of AI, it wouldn't be the first time that a roadblock stalled progress for decades.
> But I bet it would catch up real fast to GCC with a fraction of the resources if it was guided by a few compiler engineers in the loop.
Okay, so at that point, we would have proved that AI can replicate an existing software project using hundreds of thousands of dollars of computing power and probably millions of dollars in human labour costs from highly skilled domain experts.
jopsen | 20 days ago
Most of the time when you're writing a compiler for a new language, you'll be doing things that have been done before.
Because most of the concepts in your language are brought along from somewhere else.
That said: I'd always want compiler and language designs to be well considered. Ideally, the authors have some proofs of soundness in their heads.
Perhaps LLMs will make formal verification more feasible (from a cost perspective), and then our minds might change about what reliable software is.
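To make "formal verification" concrete: the goal is machine-checked proofs of program properties, the kind of thing a proof assistant like Lean does today. A toy sketch (not from the thread, just an illustration of the style of guarantee) might prove that list concatenation adds lengths:

```lean
-- Toy example of a machine-checked property: appending two lists
-- yields a list whose length is the sum of the inputs' lengths.
theorem len_append (α : Type) (xs ys : List α) :
    (xs ++ ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp                      -- ([] ++ ys).length = 0 + ys.length
  | cons x xs ih =>                  -- inductive step uses hypothesis ih
    simp only [List.cons_append, List.length_cons, ih, Nat.succ_add]
```

The cost argument in the comment is that today a human writes every proof script by hand; if an LLM can draft proofs that the checker then verifies, the checker still guarantees soundness, so the expensive part gets cheaper without trusting the model.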