amccollum|7 months ago

If everyone is using LLMs to write new code, and LLMs are trained on existing code from the internet, that creates an enormous barrier to the adoption of new programming languages, because no new code will be written in them, so LLMs will never learn to write the code. It is a self-reinforcing cycle.

I've experienced this to some degree already in using LLMs to write Zig code (ironically, for my own pet programming language). Because Zig is still evolving so quickly, the code the LLM produces is often wrong: it's based on examples targeting incompatible prior versions of the language. Alternatively, if you ask an LLM to write code in a more esoteric language (e.g., Faust), the results are generally pretty terrible.

gnulinux|7 months ago

Fine-tuning existing base models on your programming language is pretty practical. [1] You might need a very good and large dataset, but that's hardly a problem for a programming language you're creating, because you'd better have the ability to generate programs for fuzzing your compiler anyway.

[1] There are a lot of models that achieve this. E.g., Goedel-Prover-V2-32B [2] is a model based on Qwen3-32B and fine-tuned on Lean proofs. It works extremely well. I personally tried further fine-tuning this model on Agda, and although my dataset was pretty sloppy and small, it was pretty successful. If you actually sit down and generate a large dataset with variety, it's quite feasible to fine-tune it for any similar programming language (a rough sketch of the setup follows below).

[2] https://huggingface.co/Goedel-LM/Goedel-Prover-V2-32B
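For concreteness, here's roughly what that setup looks like with the Hugging Face stack. This is a minimal sketch, not the Goedel-Prover recipe: the dataset file, LoRA settings, and training hyperparameters are illustrative placeholders, and it assumes you've already generated a JSONL corpus of programs in your language (e.g., via your compiler's fuzzer).

  # Minimal LoRA fine-tuning sketch; all names/hyperparameters are placeholders.
  from datasets import load_dataset
  from peft import LoraConfig, get_peft_model
  from transformers import (AutoModelForCausalLM, AutoTokenizer,
                            DataCollatorForLanguageModeling, Trainer,
                            TrainingArguments)

  model_name = "Qwen/Qwen3-32B"  # the base model mentioned above
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForCausalLM.from_pretrained(model_name)

  # LoRA keeps the fine-tune cheap: only small adapter matrices are trained.
  model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                           task_type="CAUSAL_LM"))

  def tokenize(batch):
      return tokenizer(batch["text"], truncation=True, max_length=2048)

  # Assumes a JSONL file of {"text": "<program source>"} samples.
  data = load_dataset("json", data_files="my_lang_corpus.jsonl")["train"]
  data = data.map(tokenize, batched=True, remove_columns=["text"])

  trainer = Trainer(
      model=model,
      args=TrainingArguments(output_dir="out",
                             per_device_train_batch_size=1,
                             gradient_accumulation_steps=16,
                             num_train_epochs=1),
      train_dataset=data,
      data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
  )
  trainer.train()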

sshine|7 months ago

> enormous barrier to the adoption of new programming languages, because no new code will be written in them, so LLMs will never learn to write the code

Let’s see.

I’ve vibe-coded some apps with TypeScript and React, not knowing React at all, because I thought it was the most exemplified framework online.

But I came to a point where my app was too buggy and had diverged too far, and, being unable to debug it, I refactored it to Vue, since I personally know it better.

My point is that more training data doesn’t necessarily mean excellent quality; I ended up with a mixture of conflicting idioms that seasoned React developers would have frowned upon.

Picking a less exemplified language and supplementing it with more of your own knowledge of the language might yield better results. E.g., while the AI can’t write better Rust on its own, I don’t mind contributing Rust code myself more often.

roygbiv2|7 months ago

> But I came to a point where my app was too buggy and had diverged too far, and, being unable to debug it, I refactored it to Vue, since I personally know it better.

One of the many pitfalls of using an LLM to write code: it's very easy to find yourself with a codebase you know nothing about and can't progress any further, because it keeps breaking.

wolttam|7 months ago

Let's not underestimate LLMs' ability to do in-context learning. Perhaps one can just read the new language's docs and apply what it already knows from other languages.
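In practice that would look something like the sketch below: inline the docs into the prompt and ask for code. The docs file, the model name, "MyLang", and the task are all hypothetical placeholders; the call itself is the standard OpenAI chat API.

  # In-context learning sketch: feed the new language's docs in the prompt.
  # "mylang_reference.md", "MyLang", and the model name are made up.
  from openai import OpenAI

  client = OpenAI()
  docs = open("mylang_reference.md").read()  # must fit in the context window

  resp = client.chat.completions.create(
      model="gpt-4o",
      messages=[
          {"role": "system",
           "content": "You write code in MyLang. Language reference:\n" + docs},
          {"role": "user",
           "content": "Write a MyLang function that reverses a list."},
      ],
  )
  print(resp.choices[0].message.content)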

middayc|7 months ago

But didn't LLMs read all the math books, and still they can't really do arithmetic (they need special modes / hacks / Python to do it, I think)?

So why would they be able to "read" the docs and use that knowledge beyond the pattern-matching level? That's also why I assume tons of examples with results would do better than language docs, but I haven't tested it yet.
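If anyone wants to test it, the examples-with-results variant is the same kind of chat call as above, just with observed program/output pairs in place of prose docs. A sketch, with all the "MyLang" snippets invented for illustration:

  # Few-shot prompt of program/output pairs instead of prose docs.
  # The MyLang snippets below are made up for illustration.
  FEW_SHOT = ("Program: print(rev([1, 2, 3]))\nOutput: [3, 2, 1]\n\n"
              "Program: print(map(double, [1, 2]))\nOutput: [2, 4]\n")

  messages = [
      {"role": "system",
       "content": "You write MyLang code. Example programs and their "
                  "outputs:\n" + FEW_SHOT},
      {"role": "user", "content": "Write a MyLang program that sums a list."},
  ]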

melagonster|7 months ago

What if we ask an LLM to write everything in Brainf**? If the language design is small enough to insert into our message every time, maybe it could work well.
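For scale: the whole language is eight commands, so the spec fits in a few prompt lines. Here's a minimal interpreter (my own sketch, not from any particular implementation) that you could use to check whatever the model emits:

  # Minimal Brainfuck interpreter: 30,000 byte cells, one data pointer,
  # eight commands. Handy for verifying whatever program the LLM produces.
  def run_bf(code: str, inp: bytes = b"") -> bytes:
      tape, ptr, pc, out, in_i = [0] * 30000, 0, 0, [], 0
      jumps, stack = {}, []
      for i, c in enumerate(code):  # precompute matching bracket positions
          if c == "[":
              stack.append(i)
          elif c == "]":
              j = stack.pop()
              jumps[i], jumps[j] = j, i
      while pc < len(code):
          c = code[pc]
          if c == ">": ptr += 1
          elif c == "<": ptr -= 1
          elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
          elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
          elif c == ".": out.append(tape[ptr])
          elif c == ",":
              tape[ptr] = inp[in_i] if in_i < len(inp) else 0
              in_i += 1
          elif c == "[" and tape[ptr] == 0: pc = jumps[pc]
          elif c == "]" and tape[ptr] != 0: pc = jumps[pc]
          pc += 1
      return bytes(out)

  # e.g., run_bf("++++++++[>++++++++<-]>+.") == b"A"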