(no title)
thorum | 1 month ago
My understanding/experience is that LLM performance in a language scales with how well the language is represented in the training data.
From that assumption, we might expect LLMs to actually do better with an existing language for which more training code is available, even if that language is more complex and seems like it should be “harder” to understand.
adastra22|1 month ago
This does fill up context a little faster, (1) not as much as debugging the problem would have in a dynamic language, and (2) better agentic frameworks are coming that “rewrite” context history for dynamic on the fly context compression.
root_axis|1 month ago
This isn't even true today. Source: heavy user of claude code and gemini with rust for almost 2 years now.
GrowingSideways|1 month ago
This is such a silly thing to say. Either you set the bar so low that "hello world" qualifies or you expect LLMs to be able to reason about lifetimes, which they clearly cannot. But LLMs were never very good at full-program reasoning in any language.
I don't see this language fixing this, but it's not trying to—it just seems to be removing cruft
bevr1337|1 month ago
I still experience agents slipping in a `todo!` and other hacks to get code to compile, lint, and pass tests.
The loop with tests and doc tests are really nice, agreed, but it'll still shit out bad code.
PunchyHamster|1 month ago
vessenes|1 month ago
Additionally just the ability to put an entire language into context for an LLM - a single document explaining everything - is also likely to close the gap.
I was skimming some nano files and while I can't say I loved how it looked, it did look extremely clear. Likely a benefit.
btown|1 month ago
nemo1618|1 month ago
Eventually AIs will create their own languages. And humans will, of course, continue designing hobbyist languages for fun. But in terms of influence, there will not be another human language that takes the programming world by storm. There simply is not enough time left.
rzmmm|1 month ago
unknown|1 month ago
[deleted]
nl|1 month ago
This isn't really true. LLMs understand grammars really really well. If you have a grammar for your language the LLM can one-shot perfect code.
What they don't know is the tooling around the language. But again, this is pretty easily fixed - they are good at exploring cli tools.
vidarh|1 month ago
In the long term I expect it won't matter - already GPT3.5 was able to reason about the basic semantics of programs in languages "synthesised" zero-shot in context by just describing it as a combination of existing languages (e.g. "Ruby with INTERCAL's COME FROM") or by providing a grammar (e.g. simple EBNF plus some notes on new/different constructs) reasonably well and could explain what a program written in a franken-language it had not seen before was likely to do.
I think long before there is enough training data for a new language to be on equal grounds in that respect, we should expect the models to be good enough at this that you could just provide a terse language spec.
But at the same time, I'd expect the same improvement to future models to be good enough at working with existing languages that it's pointless to tailor languages to LLMs.
Zigurd|1 month ago
The characteristics of failures have been interesting: As I anticipated it might be, an over ambitious refactoring was a train wreck, easily reverted. But something as simple as regenerating Android launcher icons in a Flutter project was a total blind spot. I had to Google that like some kind of naked savage running through the jungle.
nl|1 month ago
Getting the Doom sound working on it involved me setting there typing "No I can't hear anything" over and over until it magically worked...
Maybe I should have written a helper program to listen using the microphone or something.
nxobject|1 month ago
NewsaHackO|1 month ago
cmrdporcupine|1 month ago
As others said, the key is feedback and prompting. In a model with long context, it'll figure it out.
rocha|1 month ago
vidarh|1 month ago
boxed|1 month ago
whimsicalism|1 month ago
measurablefunc|1 month ago