I built a programming language using Claude Code

135 points | GeneralMaximus | 15 days ago | ankursethi.com

183 comments

[+] andsoitis|15 days ago|reply
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).

Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

Programming languages are, after all, the interface a human uses to give instructions to a computer. If you're not writing or reading it, the language, by definition, doesn't matter.

[+] marssaxman|15 days ago|reply
The constraints enforced in the language still matter. A language which offers certain correctness guarantees may still be the most efficient way to build a particular piece of software even when it's a machine writing the code.

There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.

[+] voxleone|15 days ago|reply
In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.

Roughly: machine code --> assembly --> C --> high-level languages --> frameworks --> visual tools --> LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.

One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.

[+] spelunker|15 days ago|reply
Like everything generated by LLMs though, it is built on the shoulders of giants - what will happen to software if no one is creating new programming languages anymore? Does that matter?
[+] _aavaa_|15 days ago|reply
I don't agree with the idea that programming languages don't matter for an LLM writing code. If anything, I imagine that, all else being equal, a language where the compiler enforces multiple levels of correctness would help the AI get to a goal faster.
[+] michaelbrave|15 days ago|reply
A few months back I had a similar thought and started working on a language that was really verbose and human readable, think COBOL with influences from Swift. The core idea was that this would be a business language that business people would/could read if they needed to, so it could be used for financial and similar use cases, with built-in logic engines similar to Prolog or Mercury. My idea was that once languages start being coded by AI there are two directions to go: either we max out efficiency and speed (basically let the AI code in assembly), or we lean the other way and optimize for human error checking and clear outputs on how a process flows, so my theory headed in that direction. But of course I failed. I'd never made a programming language before (I've coded a long time, but that's not the same thing), and the AIs at the time, combined with my lack of knowledge, caused a spectacular failure. I still think my theory is correct, though, especially for financial or business logic: if the code is more human readable, even a non-technical person can check it for problems. I still see a future where that is useful.
[+] onlyrealcuzzo|15 days ago|reply
> Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.

I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:

1) It maximizes local reasoning and minimizes global complexity

2) It makes the vast majority of bugs / illegal states impossible to represent

3) It makes writing correct, concurrent code as expressive as possible (where LLMs excel)

4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function input level, occasionally at the instruction level)

The idea is that it should be as easy as possible for an LLM to write (and especially to convert other languages to), and as easy as possible for you to understand, while being almost as fast as absolutely perfect C code. And by virtue of the design of the language, at the human review phase you have minimal concerns about hidden gotcha bugs.
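Point 2 is usually achieved with sum types: model each state as its own variant so contradictory combinations simply cannot be constructed. A minimal sketch in TypeScript (an illustration of the general technique, not this commenter's actual language), using a hypothetical request-state type:

```typescript
// A request cannot be "loading" while also carrying data or an error:
// the discriminated union makes those contradictory states unrepresentable.
type RequestState =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: string }
  | { status: "error"; message: string };

function describe(state: RequestState): string {
  // The switch is exhaustive; the compiler rejects a forgotten variant.
  switch (state.status) {
    case "idle":
      return "nothing requested yet";
    case "loading":
      return "in flight";
    case "success":
      return `got ${state.data.length} bytes`;
    case "error":
      return `failed: ${state.message}`;
  }
}
```

Because `data` only exists on the `"success"` variant, a whole class of "accessed data before it arrived" bugs is ruled out at compile time rather than caught in review.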

[+] eatsyourtacos|15 days ago|reply
I have been building a game via a separate game logic library and Unity (which includes that independent library).. let's just say that over the last couple weeks I have 100% lost the need to do the coding myself. I keep iterating and having it improve, and there are hundreds of unit tests.. I have a Unity MCP and it does 95% of the Unity work for me. Of course the real game will need custom designing and all that; but in terms of getting a complete prototype set up.... I am literally no longer the coder. I just did in a week what would have taken me months and months and months. Granted, Unity is still somewhat new to me, but still.. even if you are an expert - it can immediately look at all your game objects and detect issues etc.

So yeah, for some things we are already at the point of "I am no longer the coder, I am the architect".. and it's scary.

[+] johnfn|15 days ago|reply
> If you’re not writing or reading it, the language, by definition doesn’t matter.

By what definition? It still matters whether I write my app in Rust vs, say, Python, because the Rust version still has better performance characteristics.

[+] johnbender|15 days ago|reply
In principle (and we hope in practice) the person is still responsible for the consequences of running the code and so it remains important they can read and understand what has been generated.
[+] koolala|15 days ago|reply
Saves tokens. The main reason though is to manage performance for what techniques get used for specific use cases. In their case it seems to be about expressiveness in Bash.
[+] andyfilms1|15 days ago|reply
I've been wondering if a diffusion model could just generate software as binary that could be fed directly into memory.
[+] gopalv|15 days ago|reply
> More addictive than that is the unpredictability and randomness inherent to these tools. If you throw a problem at Claude, you can never tell what it will come up with. It could one-shot a difficult problem you’ve been stuck on for weeks, or it could make a huge mess. Just like a slot machine, you can never tell what might happen. That creates a strong urge to try using it for everything all the time.

That is the part of the post that stuck with me, because I've also picked up impossible challenges and tried to get Claude to dig me out of a mess without giving up from very vague instructions[1].

The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.

Sure it made a mistake, but it is right there, you could go again.

Pull the lever, doesn't matter if the kids have Karate at 8 AM.

[1] - https://github.com/t3rmin4t0r/magic-partitioning

[+] fud101|15 days ago|reply
> The effect feels like the Loss-Disguised-As-Win feeling of the video-games I used to work on at Zynga.

If you can write a blog post about this I'd like to read it.

[+] bobjordan|15 days ago|reply
I've been working on a large codebase that was already significant before LLM-assisted programming, leveraging code I’d written over a decade ago. Since integrating Claude and Codex, the system has evolved and grown massively. Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.

At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.

[+] wcarss|15 days ago|reply
This is the take, very well said. I've been trying to use analogies with cars and cabinet making, but building a house is just right for the scale and complexity of the efforts enabled, and the ownership idea threads into it well.

Going into the vault!

[+] heavyset_go|15 days ago|reply
> I believe the author owns the language.

Not according to the US Copyright Office. It is 100% LLM output, so it is not copyrightable; thus it's free for anyone to do anything with, and no claimed ownership or license can stop them.

[+] _zagj|15 days ago|reply
> Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.

I have yet to see a study showing something like a 2x or better boost in programmer productivity through LLMs. Usually it's something like 10-30%, depending on what metrics you use (which I don't doubt). Maybe it's 50% with frontier models, but seeing these comments on HN where people act like they're 10x more productive with these tools is strange.

[+] aleksiy123|15 days ago|reply
On the topic of LLMs not doing well with UI and visuals:

I've been trying a new approach I call CLI-first. I realized CLI tools are designed to be used both by humans (command line) and machines (scripting), and are perfect for LLMs since they are a text-only interface.

Essentially, instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first.

A CLI tool is cheaper and simpler, but still has a real human UX that pure APIs don't.

You can get the LLM to actually walk through the flows and journeys like a real user, end to end, and it will actually see the awkwardness or gaps in the design.

Your command structure will very roughly map to your resources or pages.

Once you are satisfied with the capability of the CLI tool (which may actually be enough on its own, or with just a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend.

All the while, you can still tell it to use the CLI to test through the flows and journeys against real tasks that you have, and iterate on it.

I did this recently for pulling some of my personal financial data and reporting on it. And now I'm doing the same for a TTS automation I've wanted for a while.
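The CLI-first idea can be sketched roughly like this: put all behavior behind one dispatch function, so a human shell session, a script, and an LLM "walking the user journey" all exercise the same code path. A minimal TypeScript sketch with a hypothetical `ledger` tool (not the commenter's actual project):

```typescript
// Hypothetical "ledger" CLI: the same run() function serves an interactive
// shell, a test script, or an LLM replaying a user flow end to end.
type Entry = { date: string; amount: number; note: string };

const entries: Entry[] = []; // in-memory store; swap for a file or API later

function run(argv: string[]): string {
  const [cmd, ...args] = argv;
  switch (cmd) {
    case "add": {
      // ledger add <date> <amount> <note...>
      const [date, amount, ...note] = args;
      entries.push({ date, amount: Number(amount), note: note.join(" ") });
      return `added entry #${entries.length}`;
    }
    case "total":
      // ledger total -> sum of all amounts
      return entries.reduce((sum, e) => sum + e.amount, 0).toFixed(2);
    default:
      return "usage: ledger <add|total> ...";
  }
}

// Wired to a real shell via: console.log(run(process.argv.slice(2)))
```

Because every command returns a plain string, "walk through the journey and tell me where it's awkward" is just a sequence of `run()` calls the LLM can read back.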

[+] asciimov|15 days ago|reply
This takes all the satisfaction out of spending a few well-thought-out weekends building your own language. So many fun options: compiled or interpreted; virtual machine, or not; single pass, double pass, or (Leeloo Dallas) Multipass? No cool BNF grammars to show off either…

It’s missing all the heart, the soul, of deciding and trading off options to get something to work just for you. It’s like you bought a rat bike from your local junkyard and are trying to pass it off as your own handmade cafe racer.

[+] fcatalan|15 days ago|reply
This enables different satisfactions. You can still choose all your options, but have a working REPL or small compiler where you are trying them out within minutes.

Also, you decide how much in control you are. Want to provide a hand-made grammar? Go ahead. Want the agent to come up with it just from chatting and pointing it at other languages? OK too. Want to program just the first arithmetic operator yourself and then skip the tedium of typing all the others so you can go to the next step? Fine...

So you can have a huge toy language in mere days and experiment with stuff you'd have to build for months by hand to be able to play with.

[+] NuclearPM|15 days ago|reply
Deciding on the syntax and semantics myself and using AI to help implement my toy language has been very rewarding.

Mine is an Io- and Rebol-inspired language that uses SQLite and LuaJIT as a runtime.

1.to 10 .map[n | n * n].each[n | n.say!]
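My reading of that expression (an assumption about the intended semantics, since the language isn't documented here): build the range 1..10, square each element, print each result. A rough TypeScript rendering for readers unfamiliar with the Io/Rebol-style pipeline syntax:

```typescript
// 1.to 10 .map[n | n * n].each[n | n.say!], rendered as method chaining:
// range 1..10, square each element, print each one.
const squares = Array.from({ length: 10 }, (_, i) => i + 1) // [1, 2, ..., 10]
  .map((n) => n * n);                                       // [1, 4, ..., 100]

squares.forEach((n) => console.log(n));
```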

[+] Bnjoroge|15 days ago|reply
Not to discount your experience, but I don't understand what's interesting about this. You could always build a programming language yourself, given enough time. Programming language constructs are well represented in the training dataset. I want someone to build something uniquely novel that's not actually in the dataset, and I'll be impressed by CC.
[+] UncleEntity|14 days ago|reply
I find it an interesting experiment in finding the limits of what they can do.

Like, I've had it build a full APL interpreter, half an optimizer, started on a copy-and-patch JIT compiler and it completely fails at "read the spec and make sure the test suite ensures compliance". Plus some additional artifacts which are genuinely useful on their own as I now have an Automated Yak Shaver™ which is where most of my projects ended up dying as the yaks are a fun bunch to play with.

[+] pluc|15 days ago|reply
Claude Code built a programming language using you
[+] ramon156|15 days ago|reply
AI-written code with a human-written blog post, that's a big step up.

That said, it's a lot of words to say not a lot of things. Still a cool post, though!

[+] ivanjermakov|15 days ago|reply
> with a human-written blog post

I believe we're at a point where it's not possible to accurately decide whether text is completely written by human, by computer, or something in between.

[+] Bnjoroge|15 days ago|reply
Agree. I've been yearning for more insightful posts, and there just aren't a lot of them out there these days.
[+] righthand|15 days ago|reply
> I’ve also been able to radically reduce my dependency on third-party libraries in my JavaScript and Python projects. I often use LLMs to generate small utility functions that previously required pulling in dependencies from NPM or PyPI.

This is such an interesting statement to me in the context of leftpad.
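For context, leftpad was an eleven-line NPM utility whose removal in 2016 broke thousands of builds; it's exactly the scale of function the parent describes generating inline instead of importing. A sketch of such a utility (not the original package's code; modern JS also has `String.prototype.padStart` built in):

```typescript
// Pad `str` on the left with `fill` (assumed single-character)
// until it is at least `width` characters long.
function leftPad(str: string, width: number, fill: string = " "): string {
  while (str.length < width) {
    str = fill + str;
  }
  return str;
}
```

Whether inlining this beats depending on it is the leftpad debate in miniature: no supply-chain risk, but every codebase now carries (and must test) its own copy.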

[+] rpowers|15 days ago|reply
I'm imagining the amount of energy required to power the datacenter so that we can produce isEven() utility methods.
[+] tines|15 days ago|reply
Next you can let Claude play your video games for you as well. Gads we are a voyeuristic society aren’t we.
[+] ajay-b|15 days ago|reply
Why not let Claude do our dating? I'm surprised someone hasn't thought of this: AI dating, let the AI find and qualify a date for you, and match with the person who meets you, for you!
[+] jetbalsa|15 days ago|reply
I am kind of doing that now. I put Kimi K2.5 into a Ralph loop to make a Screeps.com AI. So far it's been awful at it. If you want to track its progress, I have its dashboard at https://balsa.info
[+] jaggederest|15 days ago|reply
I think we're going to see a lot more of this. I've done a similar thing, hosting a toy language on Haskell, and it was remarkably easy to get something useful and usable in basically a weekend. If you keep the surface area small enough, you can now make a fully fledged, compiled language for basically any purpose you'd like, and coevolve the language, the code, and the compiler.
[+] marginalia_nu|15 days ago|reply
Yeah, it's a rewarding project. Getting a language that kinda works is surprisingly accessible. Though we must be mindful that this is still the "draw some circles" panel. Producing the rest of the famous owl is, as always, the hard bit.
[+] soperj|15 days ago|reply
We did this in 4th year comp-sci.
[+] kreek|15 days ago|reply
This is the second "I built a programming language" post in a day, and if I post the one I'm building, we can have a three-day streak :D They thought AI meant personal software, but it also means personal programming languages!

In all seriousness, this is great, and why not? As the post said, what once took months now takes weeks. You can experiment and see what works. For me, I started off building a web/API framework with certain correctness built in, and kept hitting the same wall: the guarantees I wanted (structured error handling, API contracts, making invalid states unrepresentable) really belonged at the language level, not bolted onto a framework. A few Claude Code sessions later, I had a spec, then a tree-sitter implementation, then a VM/JIT... something that, given my sandwich-generation-ness, I never would have done a few months ago.

[+] bfivyvysj|15 days ago|reply
I should post number 4: last week I built a new Lisp framework for LLMs as first-class programmers. It compiles to Go, Python, and JS.
[+] laweijfmvo|15 days ago|reply
Using LLMs to invent new programming languages is a mystery to me. Who or what is going to use this? Presumably not the author.
[+] wr639|14 days ago|reply
I am not very experienced, but I was able to use Claude Code to build a test website for my music and a test CRM, and my partner and I were planning to use it to make AI chatbots when we were marketing those types of services last year. There were times when I found it frustrating due to my lack of experience. Someone mentioned token efficiency below, and all the AIs seem to be designed to make you ask for things over and over so they can use up as many tokens as possible before giving you exactly what you need. Of course, this may also be due to my lack of experience. I do have a friend with a great deal of experience whom I go to for help, and even he has issues with it at times.
[+] jackby03|15 days ago|reply
Curious how you handled context management as the project grew — did you end up with a single CLAUDE.md or something more structured? I've been thinking about this problem and working on a standard for it.
[+] dybber|15 days ago|reply
I have been trying this as well, and you can get very far quickly.

However, I fear that agents will always work better on programming languages they have been heavily trained on. So for agent-based development, inventing a new domain-specific language (e.g. for internal use in a company) might not be as efficient as using a generic programming language that models are already trained on, and just living with the extra boilerplate.

[+] p0w3n3d|15 days ago|reply
I'd say these times will be filled with a lot of tailored-to-you "self"-made software, but the question is: are we increasing the amount of information in the world? I hear that Claude and ChatGPT are getting good at mathematical proofs, which really does add something to our knowledge, but everything else is neutral to entropy, if not decreasing it. Strange time to live in, strange valuations and devaluations...
[+] NuclearPM|15 days ago|reply
Neutral to entropy? What do you mean?
[+] amelius|15 days ago|reply
The AI age calls for a language that is append-only, so we can write in a literate programming style and mix prompts with AI output in a linear way.