Good to see more people talking about this. I wrote about this about 6 months ago, when I first noticed how LLM usage is pushing a lot of people back towards older programming languages, older frameworks, and more basic designs: https://nathanpeck.com/how-llms-of-today-are-secretly-shapin...
To be honest I don't think this is necessarily a bad thing, but it does mean that there is a stifling effect on fresh new DSLs and frameworks. It isn't an unsolvable problem, particularly now that all the most popular coding agents have MCP support that allows you to bring in custom documentation context. However, there will always be a strong force in LLMs pushing users towards the runtimes and frameworks that have the most training data in the LLM.
I think it's healthy, because it creates an undercurrent against building a higher abstraction tower. That's been a major issue: we make the stack deeper and build more of a "Swiss Army Knife" language because it lets us address something local to us, and in exchange it creates a Conway's Law problem for someone else later when they have to decipher generational "lava layers" as the trends of the marketplace shift and one new thing is abandoned for another.
The new way would be to build a disposable jig instead of a Swiss Army Knife: The LLM can be prompted into being enough of a DSL that you can stand up some placeholder code with it, supplemented with key elements that need a senior dev's touch.
The resulting code will look primitive and behave in primitive ways, which at the outset creates a myriad of inconsistency, but is OK for maintenance over the long run: primitive code is easy to "harvest" into abstract code, the reverse is not so simple.
It reminds me of this excerpt from Coders at Work, in Chapter 13 - Fran Allen:
Seibel: When do you think was the last time that you programmed?
Allen: Oh, it was quite a while ago. I kind of stopped when C came out.
That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.
The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language.
So that was the excuse for C.
Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?
Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.
By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are . . . basically not taught much anymore in the colleges and universities.
Oh that's a great blog post and a very interesting point. Yep, I hadn't considered how LLMs would affect frameworks in existing languages, but it makes sense that there's a very similar effect of reinforcing the incumbents and stifling innovation.
I'd argue that the problem of solving this effect for DSLs might be a bit harder than for frameworks, because DSLs can have wildly different semantics (imagine, for example, a logic programming DSL a la Prolog vs. a functional DSL a la Haskell), so these may not fit as nicely into the framework of MCPs. I agree that it's not unsolvable, but it definitely needs more research.
Linguistics and history of language folk: isn't there an observed slowdown of evolution of spoken language as the printing press becomes widespread? Also, "international english"?
> To be honest I don't think this is necessarily a bad thing,
I do. Would you really argue we discovered perfection in the first sixty years of computer science? In the first sixty years of chemistry we still believed in phlogiston.
It’s not just new frameworks, it’s new features. Good luck getting a LLM to write code that uses iOS 26 features, for example.
I’m not convinced simply getting the LLM to inject documentation about the features will work well (perhaps someone has studied this?) because the reason they’re good at doing ‘well known’ things is the plethora of actual examples they’re trained on.
The title should be "DSLs pose an interesting problem for LLM users".
It is significant that LLMs in coding are being promoted based on a set of promises (and assumptions) that are getting instantly and completely reversed the moment the technology gets an iota of social adoption in some space.
"Everyone can code now!" -> "Everyone must learn a highly specialized set of techniques to prompt, test generated code, etc."
"LLMs are smart and can effortlessly interface with pre-existing technologies" -> "You must adopt these agent protocols, now"
"LLMs are great at 0-shot learning" -> "I will not use this language/library/version of tool, because my model isn't trained on its examples"
"LLMs effortlessly understand existing code" -> "You must change your code specifically to be understood by LLMs"
> Suddenly the opportunity cost for a DSL has just doubled: in the land of LLMs, a DSL requires not only the investment of build and design the language and tooling itself, but the end users will have to sacrifice the use of LLMs to generate any code for your DSL.
I don't think they will. Provide a concise description + examples for your DSL and the LLM will excel at writing within your DSL. Agents even moreso if you can provide errors. I mean, I guess the article kinda goes in that direction.
But also authoring DSLs is something LLMs can assist with better than most programming tasks. LLMs are pretty great at producing code that's largely just a data pipeline.
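The "concise description + examples" approach above can be sketched as ordinary prompt assembly. Everything here is hypothetical: the toy pipeline DSL, its grammar, and the few-shot pairs are placeholders, and the resulting string would be handed to whatever model client you use.

```python
# Sketch: teach an LLM a small DSL in-context by packing a grammar
# summary plus few-shot examples into the prompt. The DSL and its
# grammar are invented for illustration, not from any real tool.

GRAMMAR = """\
pipeline  := step ('|' step)*
step      := NAME '(' args? ')'
args      := VALUE (',' VALUE)*
"""

FEW_SHOT = [
    ("read every line of log.txt and keep errors",
     "read('log.txt') | grep('ERROR')"),
    ("count unique users in events.csv",
     "read('events.csv') | col('user') | unique() | count()"),
]

def build_dsl_prompt(task: str) -> str:
    """Assemble a prompt: grammar, worked examples, then the new task."""
    shots = "\n".join(f"Task: {t}\nDSL: {d}" for t, d in FEW_SHOT)
    return (
        "You write programs in this DSL. Grammar:\n"
        f"{GRAMMAR}\n"
        "Respond with DSL code only.\n\n"
        f"{shots}\n\nTask: {task}\nDSL:"
    )

prompt = build_dsl_prompt("keep the first 10 lines of data.txt")
```

Agents can extend this by appending interpreter errors to the prompt on each retry, which is where the "even more so if you can provide errors" point comes in.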
Arguably it really depends on your DSL right? If it has a semantics that already lies close to existing programming languages, then I'd agree that a few examples might be sufficient, but what if your particular domain doesn't match as closely?
My main concern is that LLMs might excel at the mundane tasks but struggle with the more exciting advances, and so the activation energy for coming up with advanced DSLs is going to increase; as a result, the field might stagnate.
This is somewhat my take too. The way most vibe coding happens right now does create a lot of duplication, because it's cheap and easy for LLMs to do. But eventually, as the things we do with coding assistants become more complex, they're not necessarily going to be able to deal with huge swaths of duplicate code any better than humans can. Given their limited context size, a DSL that lets them fit more logic into context with fewer tokens could conceivably make DSLs more important rather than less.
I was about to paste the same sentence, and say much the same thing in response.
To add to that... One limitation of LLMs for a new DSL is that the LLM may be less likely to directly plagiarize from open source code. That could be a feature.
Another feature could be users doing their own work, and doing a better job of it, instead of "cheating on their homework" with AI slop and plagiarism, whether for school or in the workplace.
At the time, I had given in to Claude 3.5's preference for python when spinning up my first substantive vibe-coded app. I'd never written a line of python before or since, but I just let the waves carry me. Claude and I vibed ourselves into a corner, and given my ignorance, I gave up on fixing things and declared the software done as-is. I'm now the proud owner of a tiny monstrosity that I completely depend on - my own local whisper dictation app with a system tray.
I've continued to think about stack ossification since. Still feels possible, given my recent frustration trying to use animejs v4 via an LLM. There's a substantial API change between animejs v3 and v4, and no amount of direction or documentation placed in context could stop models from writing against the v3 API.
I see two ways out of the ossification attractor.
The obvious, passive, way out: frontier models cross a chasm with respect to 'putting aside' internalized knowledge (from the training data) in favor of in-context directions or some documentation-RAG solutions. I'm not terribly optimistic here - these models are hip-shooters by nature, and it feels to me that as they get smarter, this reflex feels stronger rather than weaker. Though: Sonnet 4 is generally a better instruction-follower than 3.7, so maybe.
The less obvious way out, which I hope someone is working on, is something like massive model-merging based on many cached micro fine-tunes against specific dependency versions, so that each workspace context can call out to modestly customized LLMs (LoRA style) where usage of incorrect versions of your dependencies has specifically been fine-tuned out.
I recently had to work with the Robot Framework DSL. Not a fan. I hardly think it's any more readable to a business user than imperative code either. Every DSL is another API to learn, and usually full of gotchas. Intuitiveness is in the eye of the beholder. The approach I would take is transpiling from imperative code to a natural-language explanation of what is being tested, with configuration around aliases and the like.
Consider MiniZinc. This DSL is super cool and useful for writing constraint-solving problems once and running them through any number of different backend solvers.
A lot of intermediate languages and bytecode (including LLVM itself) are very useful DSLs for representing low-level operations using a well-defined set of primitives.
Codegen DSLs are also amazing for some applications, especially for creating custom boilerplate -- write what's unique to the scenario at hand in the DSL and have the template-based codegen use the provided data to generate code in the target language. This can be a highly flexible approach, and is just one of several types of language-oriented programming (LOP).
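The template-based codegen idea above can be shown in a few lines. This is a minimal sketch, not any particular tool: the "DSL" input is just a dict describing a record type, and a template expands it into target-language source.

```python
# Minimal sketch of template-based codegen: a spec dict (standing in
# for DSL input) is expanded by a template into Python source text.
# All names here are illustrative.
from string import Template

CLASS_TMPL = Template("""\
@dataclass
class $name:
$fields
""")

def generate_dataclass(spec: dict) -> str:
    """Expand a {'name': ..., 'fields': {fname: ftype}} spec into code."""
    fields = "\n".join(f"    {f}: {t}" for f, t in spec["fields"].items())
    return CLASS_TMPL.substitute(name=spec["name"], fields=fields)

code = generate_dataclass(
    {"name": "Invoice", "fields": {"id": "int", "total": "float"}}
)
```

Real codegen systems use proper template engines and ASTs, but the shape is the same: what's unique to the scenario lives in the spec, and the boilerplate lives in the template.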
I'm not skeptical about DSLs in general, but I agree with you on Robot Framework. It has a few good points (the HTML output it formats is mostly nice), but I'm not happy with how tags on test cases work, and actually writing anything non-trivial is frustrating. It's easy to write Python extensions though, so that's where I ended up putting basically all of the logic that wasn't the "business logic" of the tests. I think that's generally what you're supposed to do, but at that point, it seems better to write it all in Python or the language of your choice.
In the LLM era, building a brand-new DSL feels unnecessary. DSLs used to make sense because they gave you a compact, domain-specific syntax that simple parsers could handle. But modern language models can already read, write, and explain mainstream languages effortlessly, and the tooling around those languages—REPLs, compilers, debuggers, libraries—is miles ahead of anything you’d roll on your own. So rather than inventing yet another mini-language, just leverage a well-established one and let the LLM (plus its mature ecosystem) do the heavy lifting.
I can't even trust an LLM to write working Java code, let alone trust it to convert whatever a DSL is supposed to express into another form. Sure, maybe there's not enough Java 23 in its training set to effectively copy into my application, but Java 11 combined with 10 year old libraries shouldn't be a problem if these coding LLMs are worth their salt.
Until LLMs stop making up language features, methods, and operators out of convenience, DSLs are here to stay.
People often use the analogy of LLMs being to high-level languages what compilers were for assembly languages, and despite being a terrible analogy there's no guarantee it won't eventually be largely true in practice. And if it does come true, consider how the advent of the compiler completely eliminated any incentive to improve the ergonomics or usability of assembly code, which has been and continues to be absolute crap, because who cares? That could be the grim future for high-level languages; this may be the end of the line.
A big difference is that compilers are deterministic, and coders generally don't review and patch the generated assembly. There's little reason to expect that LLMs will ever function like that. It's always going to be a back-and-forth of, "hey LLM code this up", "no, function f isn't quite right; do this instead", etc.
This mimics what you see in, say, Photoshop. You can edit pixels manually, you can use deterministic tools, and you can use AI. If you care about the final result, you're probably going to use all three together.
I don't think we'll ever get to the point where we a-priori present a spec to an LLM and then not even look at the code, i.e. "English as a higher-level coding language". The reason is, code is simply more concise and explicit than trying to explain the logic in English in totality up-front.
For some things where you truly don't care about the details and have lots of flexibility, maybe English-as-code could be used like that, similar to image generation from a description. But I expect for most business-related use cases, the world is going to revolve around actual code for a long time.
I suspect the important sticking point will be reliability. The "incentive" exists because of an high degree of trust, so much so that "junior dev thinks it's a compiler bug" is a kind of joke.
If compilers had significant non-deterministic error rates with no reliable fix, that would probably be a rather different timeline.
We got LLVM IR, which is sort of similar-ish to assembly but better and more portable, right? Maybe some observation could be made there: it's something that does a similar job, but does it in a way that is better suited to the job that actually remains.
An ignorant perspective, from someone who likely hasn't ever coded assembly. Assembly is tied to the system you target and can't really be "improved". You can, however, greatly improve ergonomics via macros, and everyone does this.
Good. I'll chalk that up as one of the positive effects LLMs have on the software development environment (god knows there are few enough).
DSL proliferation is a problem. I know this is not something many people care to hear, and I sympathize with that. Smart people are drawn to complexity and elegance, smart people like building solutions, and DSLs are complex and elegant solutions. I get it.
Problem is: Too many solutions create complexity, and complexity is the eternal enemy of [Grug][1]
Not every other problem domain needs its own language, and existing languages are designed to be adapted for many different problem domains. If LLMs help to stifle the wild growth of at least some DSLs that would otherwise be, then I am reasonably okay with that.
This feels like survivorship bias. Many of those older tools seem like they were once fancy new DSLs. We just respect them now as established, because they've been around for so long. But for every one thousand awkward DSLs that didn't make it, one new tool emerged which lifts software development to a new level.
Would you say the same about a parallel universe where LLMs were introduced in 1960?
> Language Design Direction 1: Teaching LLMs about DSLs (through Python?)
This is what I've been focused on last few years with a bit of Direction 3 via
python -> smt2 -> z3 -> verified rust
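The first step of that pipeline (python -> smt2) can be illustrated with a toy translator. This is a sketch only: real pipelines walk a proper AST, and the constraint triples and variable names here are invented for illustration. The output is SMT-LIB 2 text that a solver like z3 could check.

```python
# Toy slice of the python -> smt2 step: translate a tiny constraint
# set, expressed as Python tuples, into SMT-LIB 2 text for a solver.

def to_smt2(var_names, constraints):
    """constraints: list of (op, lhs, rhs) triples over integer vars."""
    lines = [f"(declare-const {v} Int)" for v in var_names]
    for op, lhs, rhs in constraints:
        lines.append(f"(assert ({op} {lhs} {rhs}))")
    lines.append("(check-sat)")
    return "\n".join(lines)

# "x > 0 and y == x * 2" as constraint triples:
smt2 = to_smt2(["x", "y"], [(">", "x", "0"), ("=", "y", "(* x 2)")])
```

From there, z3 (or any SMT-LIB-compatible solver) takes over, and the verified facts can inform the lower-level code generation steps.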
Perhaps a diffusion model for programming can be thought of as:
requirements -> design -> design by contract -> subset of python -> gc capable language (a fork of golang with ML features?) -> low level compiled language (rust, zig or C++)
As you go from left to right, there is an increasing level of detail the programmer has to worry about. The trick is to pick the right level of detail for a task.
I've been thinking: is it time for a new 'programming' language for LLMs to use instead of tool API calls? Something high-level with a loose grammar, in between natural language and strict programming. Then the backend, maybe another smaller model, translates it into API calls. With this approach the backend can be improved and updated much faster and more cheaply than the LLM itself.
Coincidentally, I released a DSL last week called Hypershell [1], a Rust-based domain-specific language for shell scripting at the type level. While writing the blog post, I found myself wondering: will this kind of DSL be easier for LLMs to use than for humans?
In an initial experiment, I found that LLMs could translate familiar shell scripting concepts into Hypershell syntax reasonably well. More interestingly, they were able to fix common issues like type mismatches, especially when given light guidance or examples. That’s a big deal, because, like many embedded DSLs, Hypershell produces verbose and noisy compiler errors. Surprisingly, the LLM could often identify the underlying cause hidden in that mess and make the right correction.
This opens up a compelling possibility: LLMs could help bridge the usability gap that often prevents embedded DSLs from being more widely adopted. Debuggability is often the Achilles' heel of such languages, and LLMs seem capable of mitigating that, at least in simple cases.
More broadly, I think DSLs are poised to play a much larger role in AI-assisted development. They can be designed to sit closer to natural language while retaining strong domain-specific semantics. And LLMs appear to pick them up quickly, as long as they're given the right examples or docs to work with.
Your experience with Hypershell points to an interesting possibility: LLMs as DSL translators rather than replacements. This could actually democratize DSLs by lowering the learning curve while preserving their domain-specific benefits. The real opportunity might be DSLs optimized for both human semantics and machine translation.
We're making a prompting DSL (BAML https://github.com/BoundaryML/baml) and what we've found is that all the syntax rules can easily be encoded into a Cursor Rules file, which we find LLMs can follow nicely. DSLs are simple by nature so there's not too many rules to define.
Here's the cursor rules file we give folks: gist.github.com/aaronvg/b4f590f59b13dcfd79721239128ec208
I've been thinking about the impact on visual programming. I've believed for a long time that any visual programming environment should have flawless round-tripping with a human readable/writable text representation (for many reasons - version control, automation, leveraging decades of tooling around text files, the fact that some tasks are just easier with text)
Unfortunately, English-as-a-programming-language * is now a thing and there will be a lot of bad/dangerous/untested code being used in real situations going forward.
* Not just English, substitute any other human language into the above
One of my big concerns (a little tangential) is that LLMs will have the effect of fixing programming language design and the current language landscape into stone. This could occur in proportion to their use by programmers. The languages that LLMs do the best and have in their training data will be the languages programmers use, and getting any new language into LLM data sets will be very hard.
I've been working on a programming language for about a year which aims to replace Bash for scripting but is far closer to Python. It's something I hope to see used by many other people in the future, but a very common objection I hear when I pitch it is: "yeah, but an LLM could just generate me a Python script to do this; sure, it might be uglier, twice as long, and not work quite as well, but it saved me from learning a new language and is probably fine." I have lots of counters to why that's a flawed argument, but it still demonstrates what I think is an increase in people's skepticism towards new languages, which will contribute to the stagnation the author is talking about. If you're writing a new language, it's demotivating to see people's receptiveness to something new diminish.
I don't blame anyone in the picture, I don't disagree that time saved with LLMs can be well worth it, but it still is a topic I think we in the PL community need to wrestle more with.
I've been drafting a blog post on this as well. My take is that programming languages largely evolve around "human" ergonomics and solve for "humans writing the code", but that can result in code that is too abstract and non-performant. I think where LLMs will succeed (more) is in writing _very dumb verbose code_ that can be easily optimized by the compiler.
What humans look at and what an AI looks at right now are similar only by circumstance, and what I sort of expect is that you start seeing something more like a "structure editor" that expresses underlying "dumb" code in a more abstract way such that humans can refactor it effectively, but what the human sees/edits isn't literally what the code "is".
I am not a fan of DSLs. Perhaps there are use cases where they are the best tool, but in general they impose a significant learning burden on those who join a project that uses a DSL. I’ve seen several DSL projects wither and die because no one wanted to learn the DSL because the knowledge and time investment did not transfer forward to anything else they would do in the future. My personal opinion is that DSLs are vanity projects; one can usually come very close to DSL clarity and simplicity by adding appropriate methods or functions. You just don’t get fancy syntactic sugar.
Maybe DSLs are “write-only” languages for humans.
I don’t wish ill or sadness on anyone but it doesn’t bother me at all if LLMs drive DSLs into extinction.
IMO, the best way to approach a DSL is simply using an existing language with a fairly flexible syntax. We've done this with Groovy and it's worked quite well. If I were to do it again I'd probably pick something like Kotlin or Ruby instead, just because they both seem to have more industry relevance.
The beauty of picking an existing language as the base is you often get an expansive standard library from the get-go. That means your job as a "DSL" writer is more based on making sure you provide the value adds that make sense for the writers of that DSL.
It's worked particularly well for us because we have a data intake pipeline that has to parse and handle all sorts of random garbage (emails, excel docs, csv files, pdfs, etc).
A language like groovy, ruby, and kotlin all work well because it's trivial to add extensions to the syntax in a way that makes sense for your domain problem. Typescript also wouldn't be a bad choice for similar reasons, the only reason I wouldn't consider it is we run a JVM backend and parsing typescript for the JVM is somewhat of a PITA.
On the one hand, this sucks. On the other hand, we're already vacillating along the Pareto frontier of how much we can stuff into code; in fact, most of the criticisms of DSLs are indirectly stating just that.
So with LLMs making it easier to project back and forth between how programmer sees the task at hand, and the underlying dumb/straightforward code they ain't gonna read anyway, maybe we'll finally get to the point of addressing the actual problem of programming language design, which is that you cannot optimize for every task and cross-cutting concern at the same time and expect improvement across the board - we're already at the limit, we're just changing which time of day/part of the project will be more frustrating.
This sounds insightful but I can't make heads or tails of "you cannot optimize for every task and cross-cutting concern at the same time and expect improvement across the board".
Programming languages researchers and designers labor under the mistaken assumption that programming practitioners--people who are writing programs to solve problems--actually want "a language with a syntax and semantics tailored for a specific domain", or any really fancy language features at all.
I say this from the perspective of someone who nearly became a PL researcher myself. I could easily have decided to study programming languages for my PhD. Back then I was delighted by learning about cool new languages and language features.
But I didn't study PL but rather ML, and then I went into industry and became a programming practitioner, rather than a PL researcher. I don't want a custom-designed ML programming language. I want a simple general-purpose language with good libraries that lets me quickly build the things I need to build. (coughPythoncoughcough)
Now that I have reached an age where I am aware of the finiteness of my time left in this universe, my reaction when I encounter cool new languages and language features is to wonder if they will be worth learning. Will the promised productivity gains allow me to recoup the cost of the time spent learning? My usual assessment is "probably not" (although now and then something worthwhile does come along).
I think that there is a very real chance that the idea of specialized programming languages will indeed disappear in the LLM era, as well as the need for various "ergonomic" features of general purpose languages that exist only to make it possible to express complex things in fewer lines of code. Will any of that be needed if the LLM can just write the code with what it has?
30 lines are always going to be easier to read/write/debug than 3000 lines, so it'll probably remain easier (for both humans and machines) to write correct code in languages that make it possible to express ideas concisely and elegantly.
> Will the promised productivity gains allow me to recoup the cost of the time spent learning?
Some deep PL stuff I doubt there is productivity gain to begin with. But many ideas in the ML language family are simple and reduce debugging pain. Time lost from one encounter with muddy JS/Python semantics is more than the time learning about sum types.
That sounds grim, but not implausible. Domain-specific languages seemed like a significant improvement over general-purpose languages plus libraries. But now that we have a tool that lets you make Jazz Hands at your computer and have it spit out something that does most of what you want, do they really help?
Maybe some boring, kind-of-consistent language like C, Python, or Go is good enough. An LLM spits out a pile of code in one or more of them that does most of what you want, and you can fix it because it's less opaque than assembly. It doesn't sound like a job I'd want, but maybe that's just the way things will go.
Embedded DSLs (e.g. PyTorch) have been hugely successful in the field of machine learning, so I think there is a bit of nuance here that you're not considering.
I also take issue with the idea that Python is simple. Python's semantics are anything but. The biggest issue the language has, performance, is a consequence of these poorly thought out semantics. If the language was actually simple it would be a lot easier to build a faster implementation.
> Let's start with what I see as the biggest problem that the introduction of LLMs is presenting to language design: everything is easier in Python.
This is so true.
A couple months ago I was trying to use LLMs to come up with code to parse some semi-structured textual data based on a brief description from the user.
I didn't want to just ask the LLM to extract the information in a structured format as this would make it extremely slow when there's a lot data to parse.
My idea was, why not ask the LLM to come with a script that does the job. Kind of "compiling" what the user asks into a deterministic piece of code that will also be efficient. The LLM just has to figure out the structure and write some code to exploit it.
I also had the bright idea to define a DSL for parsing, instead of asking the LLM to write a python script. A simple DSL for a very specific task should be better than using something like Python in terms of generating correct scripts.
I defined the DSL, created the grammar and an interpreter and I started feeding the grammar definition to the LLM when I was prompting it to do the work I needed.
The result was underwhelming and also hilarious at times. When I built a loop to feed the model the errors and ask it to correct the script, the model sometimes returned Python scripts, completely ignoring the instructions.
As the author said, everything is easier in Python, especially if you are a large language model!
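The "compile and repair" loop described above has a simple shape, sketched below. The model call is a stand-in (here a stub, so the loop logic itself is checkable); the validator and error messages are likewise hypothetical.

```python
# Sketch of a generate -> validate -> retry loop for LLM-produced DSL
# scripts. generate() stands in for the LLM call; validate() returns
# None on success or an error string to feed back into the prompt.

def repair_loop(generate, validate, prompt, max_tries=3):
    """Retry generation until validate() passes, or give up with None."""
    feedback = ""
    for _ in range(max_tries):
        script = generate(prompt + feedback)
        error = validate(script)
        if error is None:
            return script
        feedback = f"\nPrevious attempt failed: {error}\nFix it."
    return None

# Stub model: emits Python on the first try (as described above!),
# then a valid DSL script on the retry.
attempts = iter(["import os  # oops: Python, not the DSL", "read | grep"])
result = repair_loop(
    lambda p: next(attempts),
    lambda s: None if "|" in s else "not valid DSL",
    "parse logs",
)
```

The failure mode in the comment above is exactly the stub's first attempt: the model falls back to Python despite the instructions, and only the validation loop catches it.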
Using an LLM to generate code is not an easily traceable and explainable process. Using a DSL to same ends is. PL research has yet to meet explainability in AI head on.
Is it just me, or does the graph of LLM language performance versus training-set size show the opposite of what they are saying? To me it looks flat, implying training-set size has little influence on LLM performance in the language. For instance, some niche languages appear to out-perform better-known languages (with more variance in the niche-language performance).
It is not only you, but I think it is only you and me. I've also skimmed through the comments, wondering whether they are AI-generated, or whether people even read the article. The author essentially took a graph and then claimed an interpretation different from reality.
What the graph shows is that LLMs struggle with "hard" languages (Rust, Go, C#) with the exception of Ruby.
Why care what others are doing? Just do what makes sense for your domain and don't worry about what is hot. Who cares?
If something is useful people will use it. Just because it seems like llms are everywhere, not everyone cares. I wouldn't want vibe coders to be my target audience anyway.
Python is an acceptable though not perfect substrate for developing embedded DSLs. It's dynamic enough that you can do a lot of things. Besides operator overloading which is commonly used in C++ for eDSLs, you can even write decorators that take the AST, completely regenerate new code via LLVM or something similar. This is the approach used by numba for JIT for example.
In the end I think mentioning Python is a red herring. You can produce an eDSL in Python that's not in LLM training data so difficult for LLMs to grok, and yet still perfectly valid Python. The deeper issue here is that even if you use Python, LLMs are restricting people to use a small subset of what Python is even capable of.
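The operator-overloading style of eDSL mentioned above fits in a few lines of plain Python. This is a toy illustration (the stage names are invented), but it's the same mechanism PyTorch-style libraries and numba-like tools build on: ordinary Python syntax assembling a computation.

```python
# Tiny Python eDSL via operator overloading: '|' chains pipeline
# stages into a composed function, jQuery/shell-pipe style.

class Stage:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # Compose: (a | b) applies a, then b.
        return Stage(lambda x: other.fn(self.fn(x)))

    def __call__(self, x):
        return self.fn(x)

double = Stage(lambda xs: [2 * v for v in xs])
keep_even = Stage(lambda xs: [v for v in xs if v % 2 == 0])

pipeline = keep_even | double  # valid Python, domain-shaped syntax
```

Note that this is "perfectly valid Python" in exactly the sense above: nothing here needs a parser, yet an LLM that has never seen this particular eDSL has no training data telling it what `keep_even | double` means.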
FWIW, the core assertion here isn't even LLM-specific. DSL design leans heavily on the idea of an expert author who understands the underlying data model well already. That is no less true in the meatspace world than it is for an AI.
DSLs look great if they let you write the code you already know how to write faster. DSLs look like noise to everyone else, including Gemini and Claude.
I used to be a big DSL booster in my youth. No longer. Once you need to stop what you're doing and figure out your ninth or eleventh oddball syntax, you realize that (as per the article) Everything is Easier in Python.
yep, I'm still mad pg SOLD US A LIE (use a secret-weapon ancient language in an unmaintainable way that no one wants in the workplace and become a gazillionaire). But gullible people are easily misled (see cults etc).
Exactly right. Now that we're in the era of LLMs and Coding Agents it's never been more clear that DSLs should be avoided; because LLMs cannot reason about them as well as popular languages, and that's just a fact. You don't need to dig any further, to think about pros and cons, imo.
The fewer languages there are in the world (as a general rule) the better off everyone is. We do need a low-level language like C++ to exist and a high-level one like TypeScript, but we don't _need_ multiple of each. The fact that there are already multiple of each is a challenge to be dealt with, not a goal we reached on purpose.
> a DSL requires not only the investment of build and design the language and tooling itself
Not necessarily true. There are two kinds of DSLs: external and internal.
An external DSL has its own tooling, parser, etc. The nix language, for example.
An internal DSL is like a small parasite that lives inside an existing language, reusing some of its syntax and tools. It's almost like intentional pareidolia. Like jQuery, for example.
Internal DSLs reduce the cognitive load, and in my opinion, they're the best kind of DSL.
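A hedged sketch of what such an internal DSL can look like in Python (class and method names invented for illustration): plain method chaining that reads like a small query language while remaining ordinary host-language code.

```python
# A jQuery-flavored internal DSL is just method chaining over an
# ordinary class in the host language; no parser or tooling needed.
class Query:
    def __init__(self, items):
        self.items = list(items)

    def where(self, pred):
        return Query(i for i in self.items if pred(i))

    def order_by(self, key):
        return Query(sorted(self.items, key=key))

    def take(self, n):
        return Query(self.items[:n])

    def to_list(self):
        return self.items

# Reads like a little query language, parses as plain Python:
result = (
    Query([5, 3, 8, 1, 9])
    .where(lambda n: n > 2)
    .order_by(lambda n: n)
    .take(2)
    .to_list()
)
print(result)  # [3, 5]
```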
GitHub holds so many unsuccessful (for some definition of the word) pet projects. The techniques employed there are valid, working software that isn't found in the CRM, CRUD, middleware, or data-entry software making up most of the world's portfolio to date, and that makes those projects the most valuable additions to LLM training data. Arguing against making such projects is insanity.
Articles in the era of LLMs: assume endless torrent of LLM code generation forever, insert how will this affect X now that we have our foregone conclusion.
Hopefully, this will be an interim concern. While it's currently vapor, a future LLM might propose and implement effective DSLs when requested. Context windows are increasing, and perhaps contemporary LLMs could already handle niche languages better with appropriate prompting. However, the cultural network effects described here are concerning.
For LLMs, programming languages are basically just additional languages that we speak. So does handling a low-resource programming language work the same way as handling a human language with less representation in the training data?
DSL's would be even harder for LLM's to get right in that case compared to the low-resource language itself
Depending on the size of a DSL, all the more recent LLMs can be taught to work with it. LoRA/fine-tuning is the heaviest option, followed by RAG, and then simply loading the DSL into a large, cached system prompt. And once a model can work with a DSL, the tokens spent on valuable code creation can drop dramatically.
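The lightest option might look like this sketch: packing the DSL's grammar and a few worked examples into one reusable system prompt. The grammar, examples, and function names here are invented for illustration, and no particular LLM API is assumed.

```python
# Invented mini-DSL grammar and few-shot examples, purely illustrative.
GRAMMAR = "rule := 'when' CONDITION 'then' ACTION"
EXAMPLES = [
    ("turn on the fan above 30C", "when temp > 30 then fan on"),
    ("alert if the door opens", "when door == open then alert"),
]

def build_system_prompt(grammar, examples):
    # One string, built once and cached, reused across every request.
    shots = "\n".join(f"user: {q}\ndsl: {a}" for q, a in examples)
    return (
        "You write programs only in the following DSL.\n"
        f"Grammar: {grammar}\n"
        f"Examples:\n{shots}"
    )

prompt = build_system_prompt(GRAMMAR, EXAMPLES)
print("when temp > 30 then fan on" in prompt)  # True
```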
Python is just a beautiful, well-designed language - in an era where LLM's generate code, it is kind of reassuring that they mostly generate beautiful code and Python has risen to the top. If you look at the graph, Julia and Lua also do incredibly well, despite being a minuscule fraction of the training data.
But Python/Julia/Lua are by no means the most natural languages - what is natural is what people write before the LLM, the stuff that the LLM translates into Python. And it is hard to get a good look at these "raw prompts" as the LLM companies are keeping these datasets closely guarded, but from HumanEval and MBPP+ and YouTube videos of people vibe coding and such, it is clear that it is mostly English prose, with occasional formulas and code snippets thrown in, and also it is not "ugly" text but generally pre-processed through an LLM. So from my perspective the next step is to switch from Python as the source language to prompts as the source language - integrating LLM's into the compilation pipeline is a logical step. But, currently, they are too expensive to use consistently, so this is blocked by hardware development economics.
mhhm yes yes. There's a thread of discussion that I didn't quite choose to delve into in the post, but there is something interesting in the observation that languages close to natural language (Python was famous for reading almost like executable pseudocode for a while) are easier for LLMs to generate.
Maybe designing new languages to be close to pseudocode would lead to better results when asking LLMs to generate them? But there's also a fear that prose-like syntax might not be the most appropriate for some problem domains.
In the same way LLVM is used to forward port low level optimizations to new languages, I wonder if LLMs can interpret new DSLs through the LLVM (or similar) lens and provide value.
I suppose this could be done now for all the existing languages that target LLVM and unify the training set across languages.
I think designers will end up needing to work with an existing LLM or even provide their own LLM to get developers to adopt a new language/library/feature. The market has been heading in that direction for a while now, with developers expecting official docs and tutorials.
Interesting, I tend to think design is a dying job now. It's not that I don't see value in designers, but if I am being honest, when they aren't advocating for a complete UI refresh, there isn't a whole lot for them to do. IMHO this is why we see all the major apps refreshing their UIs every 9 to 12 months. It is unnecessary and aggravating to users, and if we can get away from that I think we would be better off.
Since I work on a language professionally, I think about this all the time.
As someone who loves a wide diversity of actively evolving programming languages, it makes me sad to think those days of innovation may be ending. But I hope that's not going to happen.
It has always been the case that anyone designing a new language or adding features to an existing one is acutely mindful of what programming language knowledge is already in the heads of their users. The reason so many languages, say, use `;` for statement terminators is not because that syntax is particularly beautiful. It's just familiar.
At the same time, designers assume that giving users a better way to express something may be worth the cost of asking them to learn and adapt to the new way.
In theory, that should be true of LLMs as well. Yes, a new language feature may be hard to get the LLM to auto-complete. But if human users find that the feature makes their code easier to read and maintain, they will still want to use it. They will, and eventually it will percolate out into the ecosystem to get picked up the next time the LLMs are trained, in the same way that human users learn new language features by stumbling onto them in code in the wild.
So I'd like to believe that we'll continue to be able to push languages forward even in a world where a large fraction of code is written by machines. I also hope that LLM training cost goes down and frequency goes up, so that the lag behind what's out there in the world and what the LLMs know gets smaller over time.
But it's definitely possible that instead of that, we'll get a feedback loop where human users don't know a language feature even exists because the LLMs never generate code using it, and the LLMs never learn the feature exists because humans aren't writing it.
I have this same fear about, well, basically everything with LLMs: an endless feedback loop where humans get their "information" from LLMs and churn out content which the LLMs train on and the whole world wanders off into a hallucinatory bubble no longer grounded in reality. I don't know how to get people and/or the LLMs to touch grass to avoid that.
I do hope I get to work on making languages great for humans first, and for LLMs second. I'm way more excited to go to work making something that actual living breathing people use than as input data for a giant soulless matrix of floats.
I guess if you love writing DSLs this is an unfortunate development, but for me it's more of a glass half full: I can have the AI spit out boilerplate I need to solve a problem instead of spending a week building a one-off DSL compiler.
The benefit of (some) DSLs is that they make invalid states unrepresentable, which isn't possible with the entire surface-area of a programming language at your (or the LLM's) disposal.
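A DSL can enforce this at the syntax level; the closest plain-Python analogue, sketched here with invented names, is a tagged union where a result is either a success carrying a value or a failure carrying a reason, and the confusing states ("both" or "neither") simply cannot be constructed.

```python
from dataclasses import dataclass
from typing import Union

# Illustrative only: two frozen variants instead of one record with
# optional value *and* optional error fields.
@dataclass(frozen=True)
class Ok:
    value: str

@dataclass(frozen=True)
class Err:
    reason: str

Result = Union[Ok, Err]

def describe(r: Result) -> str:
    if isinstance(r, Ok):
        return f"ok: {r.value}"
    return f"failed: {r.reason}"

print(describe(Ok("parsed")))      # ok: parsed
print(describe(Err("bad input")))  # failed: bad input
```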
personally i think DSLs could be helpful if they are really good at:
1. explaining the syntax clearly
2. providing a fast checker with good error messages
3. preventing errors
LLMs seem pretty good at figuring out these things when given a good feedback loop, and if the DSL truly makes complex programs easier to express, then LLMs could benefit from it too. Fewer lines of code can mean less context to write the program and understand it. But it has to be a good DSL and I wouldn't be surprised if many are just not worth it.
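As a toy illustration of the checker point, here is a sketch (the one-line-per-rule syntax is invented) of a fast validator that returns the kind of specific, line-numbered error messages an LLM agent loop can act on.

```python
import re

# Invented toy DSL: each non-blank line must read "set NAME = NUMBER".
LINE = re.compile(r"^set (\w+) = (\d+)$")

def check(program):
    """Return specific, line-numbered errors an agent loop can feed back."""
    errors = []
    for lineno, line in enumerate(program.splitlines(), start=1):
        if line.strip() and not LINE.match(line):
            errors.append(
                f"line {lineno}: {line!r}: expected 'set NAME = NUMBER'"
            )
    return errors

print(check("set retries = 3\nset timeout = 30"))  # [] -- clean program
print(check("set retries = three"))                # one pointed error
```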
Since DSLs are necessarily niche, you're never going to have much training data in that language to feed into LLM training. Sure this problem can be overcome, but you're just creating more work than the time saved by having humans code in DSLs.
Replace “DSL” with “languages” in general - the same issues apply. I’m not sure that in a hypothetical timeline where Rust was released today, it would have gotten any traction.
Domain Specific Language. It's when you invent a tiny programming language with nouns and verbs that are appropriate for some niche. Like maybe for eg wedding planning you wouldn't use json or yaml, but some custom format that lets users define people and who has to sit where and where they can't sit without being professional programmers.
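A sketch of what such a format might look like, with the syntax invented purely for illustration: a few line-oriented declarations that non-programmers could write, plus a tiny parser.

```python
# Hypothetical seating-plan mini-language: declare people, and pairs
# who must be kept apart. Syntax invented for this example.
source = """
person alice
person bob
person carol
apart alice bob
"""

def parse(text):
    people, apart = set(), set()
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "person":
            people.add(parts[1])
        elif parts and parts[0] == "apart":
            # frozenset: the constraint is symmetric between the pair.
            apart.add(frozenset(parts[1:3]))
    return people, apart

people, apart = parse(source)
print(sorted(people))                        # ['alice', 'bob', 'carol']
print(frozenset({"alice", "bob"}) in apart)  # True
```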
As AI systems improve, and especially as they add more 'self-play' in training, they might become really good at working in any language you can throw at them.
(To expand on the self-play aspect: when training you might want to create extra training data by randomly creating new 'fake' programming languages and letting it solve problems in them. It's just another way to add more training data.)
In any case, if you use an embedded DSL, as is already commonly done in Haskell, the LLMs should still give you good performance. In some sense, an 'embedded DSL' is just a fancy name for a library in a specific style.
I'd be interested to hear how you find staying away from them as the years progress.
My experience so far is that they write mediocre code which is very often correct, and is relatively easy to review and improve. Of course I work with languages like elixir, python, typescript, and SQL - all of which LLMs are very good at.
Without a doubt I've seen a significant increase in the amount of work I can produce. As far as I can tell the defect rate in my work hasn't changed. But the way I work has, I'm now reviewing and refactoring significantly more than before and hand writing a lot less.
To be honest, I'd worry about someone's ability to compete in the job market if they resisted for much longer. With the obvious exceptions of spaces where LLMs can't be used, or have very poor performance.
If someone's not using LLMs yet in 2025 to write code they're basically Amish.
They're riding a horse in the age of automobiles, just because they think they're more comfortable on horseback, while they've never been in a car even once.
A lot of features you take granted in new languages come from academic programming language research. Generics in Java for example came from GJ, an academic programming language research project headed by academics including Philip Wadler.
Its primary point is that TIOBE is based on *number* of search results on a weighted list of search engines, not actual usage in Github, search volume, job listings, or any of the other number of signals you'd expect a popularity index to use.
It could easily be indicating that Python articles are being generated by LLMs more than any other class of articles.
I'm sorry, I simply refuse to take seriously an outlet that publishes the following:
"""
Remarkably, SQL has started dropping slowly recently. This month it is at position #12, which is its lowest position in the TIOBE index ever. SQL will remain the backbone and lingua franca of databases for decades to come. However, in the booming field of AI, where data is usually unstructured, NoSQL databases are often a better fit. NoSQL (which uses data interchange formats such as JSON and XML) has become a serious threat for the well-defined but rather static SQL approach. NoSQL's popularity is comparable to the rise of dynamically typed languages such as Python if compared to well-defined statically typed programming languages such as C++ and Java.
"""
This is actually a great site; it feels much more representative of what I actually see in job ads and the real world than some other rankings. If all I did was browse HN all day I'd think Rust is the only language people use for new projects
NathanKP|8 months ago
To be honest I don't think this is necessarily a bad thing, but it does mean that there is a stifling effect on fresh new DSL's and frameworks. It isn't an unsolvable problem, particularly now that all the most popular coding agents have MCP support that allows you to bring in custom documentation context. However, there will always be a strong force in LLM's pushing users towards the runtimes and frameworks that have the most training data in the LLM.
crq-yml|8 months ago
The new way would be to build a disposable jig instead of a Swiss Army Knife: The LLM can be prompted into being enough of a DSL that you can stand up some placeholder code with it, supplemented with key elements that need a senior dev's touch.
The resulting code will look primitive and behave in primitive ways, which at the outset creates a myriad of inconsistency, but is OK for maintenance over the long run: primitive code is easy to "harvest" into abstract code, the reverse is not so simple.
pmontra|8 months ago
Seibel: When do you think was the last time that you programmed?
Allen: Oh, it was quite a while ago. I kind of stopped when C came out.
That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.
The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language.
So that was the excuse for C.
Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?
Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.
By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are . . . basically not taught much anymore in the colleges and universities.
gopiandcode|8 months ago
I'd argue that solving this effect for DSLs might be a bit harder than for frameworks, because DSLs can have wildly different semantics (imagine, for example, a logic programming DSL a la Prolog vs. a functional DSL a la Haskell), so they may not fit as nicely into the framework of MCPs. I agree that it's not unsolvable, but it definitely needs more research.
scelerat|8 months ago
Is this an observation of a similar phenomenon?
jbreckmckye|8 months ago
I do. Would you really argue we discovered perfection in the first sixty years of computer science? In the first sixty years of chemistry we still believed in phlogiston
librasteve|8 months ago
winter_blue|8 months ago
Even perhaps training a separate new neural network to translate from Python/Java/etc to your new language.
unknown|8 months ago
[deleted]
jrmg|8 months ago
I’m not convinced simply getting the LLM to inject documentation about the features will work well (perhaps someone has studied this?) because the reason they’re good at doing ‘well known’ things is the plethora of actual examples they’re trained on.
guywithahat|8 months ago
romaniv|8 months ago
It is significant that LLMs in coding are being promoted based on a set of promises (and assumptions) that are getting instantly and completely reversed the moment the technology gets an iota of social adoption in some space.
"Everyone can code now!" -> "Everyone must learn a highly specialized set of techniques to prompt, test generated code, etc."
"LLMs are smart and can effortlessly interface with pre-existing technologies" -> "You must adopt these agent protocols, now"
"LLMs are great at 0-shot learning" -> "I will not use this language/library/version of tool, because my model isn't trained on its examples"
"LLMs effortlessly understand existing code" -> "You must change your code specifically to be understood by LLMs"
This is getting rather ridiculous.
oehpr|8 months ago
https://upload.wikimedia.org/wikipedia/commons/9/94/Gartner_...
swyx|8 months ago
furyofantares|8 months ago
> Suddenly the opportunity cost for a DSL has just doubled: in the land of LLMs, a DSL requires not only the investment of build and design the language and tooling itself, but the end users will have to sacrifice the use of LLMs to generate any code for your DSL.
I don't think they will. Provide a concise description plus examples for your DSL and the LLM will excel at writing within it. Agents even more so if you can provide errors. I mean, I guess the article kinda goes in that direction.
But also authoring DSLs is something LLMs can assist with better than most programming tasks. LLMs are pretty great at producing code that's largely just a data pipeline.
gopiandcode|8 months ago
Examples of domains that might be more challenging to design DSLs for: languages for knitting, non-deterministic languages to represent streaming, etc. (e.g., https://pldi25.sigplan.org/details/pldi-2025-papers/50/Funct...)
My main concern is that LLMs might excel at the mundane tasks but struggle with the more exciting advances, so the activation energy for coming up with advanced DSLs is going to increase and, as a result, the field might stagnate.
daxfohl|8 months ago
loa_in_|8 months ago
neilv|8 months ago
To add to that... One limitation of LLM for a new DSL is that the LLM may be less likely to directly plagiarize from open source code. That could be a feature.
Another feature could be users doing their own work, and doing a better job of it, instead of "cheating on their homework" with AI slop and plagiarism, whether for school or in the workplace.
NiloCK|8 months ago
At the time, I had given in to Claude 3.5's preference for python when spinning up my first substantive vibe-coded app. I'd never written a line of python before or since, but I just let the waves carry me. Claude and I vibed ourselves into a corner, and given my ignorance, I gave up on fixing things and declared the software done as-is. I'm now the proud owner of a tiny monstrosity that I completely depend on - my own local whisper dictation app with a system tray.
I've continued to think about stack ossification since. It still feels possible, given my recent frustration trying to use animejs v4 via an LLM. There's a substantial API change between animejs v3 and v4, and no amount of direction or documentation placed in context could stop models from writing against the v3 API.
I see two ways out of the ossification attractor.
The obvious, passive way out: frontier models cross a chasm with respect to 'putting aside' internalized knowledge (from the training data) in favor of in-context directions or some documentation-RAG solution. I'm not terribly optimistic here: these models are hip-shooters by nature, and it seems to me that as they get smarter, this reflex grows stronger rather than weaker. Though: Sonnet 4 is generally a better instruction-follower than 3.7, so maybe.
The less obvious way out, which I hope someone is working on, is something like massive model-merging based on many cached micro fine-tunes against specific dependency versions, so that each workspace context can call out to modestly customized LLMs (LoRA style) where usage of incorrect versions of your dependencies has specifically been fine-tuned out.
darepublic|8 months ago
TimTheTinker|8 months ago
Consider MiniZinc. This DSL is super cool and useful for writing constraint-solving problems once and running them through any number of different backend solvers.
A lot of intermediate languages and bytecode (including LLVM itself) are very useful DSLs for representing low-level operations using a well-defined set of primitives.
Codegen DSLs are also amazing for some applications, especially for creating custom boilerplate -- write what's unique to the scenario at hand in the DSL and have the template-based codegen use the provided data to generate code in the target language. This can be a highly flexible approach, and is just one of several types of language-oriented programming (LOP).
iguessthislldo|8 months ago
jo32|8 months ago
jeroenhd|8 months ago
Until LLMs stop making up language features, methods, and operators out of convenience, DSLs are here to stay.
kibwen|8 months ago
daxfohl|8 months ago
This mimics what you see in, say, Photoshop. You can edit pixels manually, you can use deterministic tools, and you can use AI. If you care about the final result, you're probably going to use all three together.
I don't think we'll ever get to the point where we present a spec to an LLM a priori and then not even look at the code, i.e. "English as a higher-level coding language". The reason is that code is simply more concise and explicit than trying to explain the logic in English in totality up front.
For some things where you truly don't care about the details and have lots of flexibility, maybe English-as-code could be used like that, similar to image generation from a description. But I expect for most business-related use cases, the world is going to revolve around actual code for a long time.
Terr_|8 months ago
If compilers had significant non-deterministic error rates with no reliable fix, that would probably be a rather different timeline.
bee_rider|8 months ago
noobermin|8 months ago
usrbinbash|8 months ago
DSL proliferation is a problem. I know this is not something many people care to hear, and I sympathize with that. Smart people are drawn to complexity and elegance, smart people like building solutions, and DSLs are complex and elegant solutions. I get it.
Problem is: Too many solutions create complexity, and complexity is the eternal enemy of [Grug][1]
Not every other problem domain needs its own language, and existing languages are designed to be adapted for many different problem domains. If LLMs help to stifle the wild growth of at least some DSLs that would otherwise be, then I am reasonably okay with that.
[1]: https://grugbrain.dev
nothrabannosir|8 months ago
Would you say the same about a parallel universe where LLMs were introduced in 1960?
adsharma|8 months ago
This is what I've been focused on last few years with a bit of Direction 3 via
Perhaps a diffusion model for programming can be thought of as: requirements -> design -> design by contract -> subset of Python -> GC-capable language (a fork of Golang with ML features?) -> low-level compiled language (Rust, Zig, or C++)
As you go from left to right, there is an increasing level of detail the programmer has to worry about. The trick is to pick the right level of detail for a task.
Previous writing: https://adsharma.github.io/agentic-transpilers/
MoonGhost|8 months ago
cpeterso|8 months ago
maybevoid|8 months ago
In an initial experiment, I found that LLMs could translate familiar shell scripting concepts into Hypershell syntax reasonably well. More interestingly, they were able to fix common issues like type mismatches, especially when given light guidance or examples. That’s a big deal, because, like many embedded DSLs, Hypershell produces verbose and noisy compiler errors. Surprisingly, the LLM could often identify the underlying cause hidden in that mess and make the right correction.
This opens up a compelling possibility: LLMs could help bridge the usability gap that often prevents embedded DSLs from being more widely adopted. Debuggability is often the Achilles' heel of such languages, and LLMs seem capable of mitigating that, at least in simple cases.
More broadly, I think DSLs are poised to play a much larger role in AI-assisted development. They can be designed to sit closer to natural language while retaining strong domain-specific semantics. And LLMs appear to pick them up quickly, as long as they're given the right examples or docs to work with.
[1] https://contextgeneric.dev/blog/hypershell-release/
ethan_smith|8 months ago
aaronvg|8 months ago
Here's the cursor rules file we give folks: gist.github.com/aaronvg/b4f590f59b13dcfd79721239128ec208
mbokinala|8 months ago
andybak|8 months ago
LLMs just add another reason to this list.
boznz|8 months ago
* Not just English, substitute any other human language into the above
api|8 months ago
amterp|8 months ago
I don't blame anyone in the picture, I don't disagree that time saved with LLMs can be well worth it, but it still is a topic I think we in the PL community need to wrestle more with.
nylonstrung|8 months ago
LLMs are surprisingly bad at Bash and apparently very bad at PowerShell.
Pythonic shell scripting is well suited to their language biases right now
kkukshtel|8 months ago
What humans look at and what an AI looks at right now are similar only by circumstance, and what I sort of expect is that you start seeing something more like a "structure editor" that expresses underlying "dumb" code in a more abstract way such that humans can refactor it effectively, but what the human sees/edits isn't literally what the code "is".
IDK it's not written yet but when it is it will be here: https://kylekukshtel.com/llms-programming-language-design
efitz|8 months ago
Maybe DSLs are “write-only” languages for humans.
I don’t wish ill or sadness on anyone but it doesn’t bother me at all if LLMs drive DSLs into extinction.
cogman10|8 months ago
The beauty of picking an existing language as the base is you often get an expansive standard library from the get-go. That means your job as a "DSL" writer is more based on making sure you provide the value adds that make sense for the writers of that DSL.
It's worked particularly well for us because we have a data intake pipeline that has to parse and handle all sorts of random garbage (emails, excel docs, csv files, pdfs, etc).
A language like groovy, ruby, and kotlin all work well because it's trivial to add extensions to the syntax in a way that makes sense for your domain problem. Typescript also wouldn't be a bad choice for similar reasons, the only reason I wouldn't consider it is we run a JVM backend and parsing typescript for the JVM is somewhat of a PITA.
TeMPOraL|8 months ago
So with LLMs making it easier to project back and forth between how the programmer sees the task at hand and the underlying dumb/straightforward code they ain't gonna read anyway, maybe we'll finally get to addressing the actual problem of programming language design: you cannot optimize for every task and cross-cutting concern at the same time and expect improvement across the board. We're already at the limit; we're just changing which time of day, or which part of the project, will be more frustrating.
guelo|8 months ago
Can someone help me out?
jp57|8 months ago
I say this from the perspective of someone who nearly became a PL researcher myself. I could easily have decided to study programming languages for my PhD. Back then I was delighted by learning about cool new languages and language features.
But I didn't study PL; I studied ML, and then I went into industry and became a programming practitioner rather than a PL researcher. I don't want a custom-designed ML programming language. I want a simple general-purpose language with good libraries that lets me quickly build the things I need to build. (coughPythoncoughcough)
Now that I have reached an age where I am aware of the finiteness of my time left in this universe, my reaction when I encounter cool new languages and language features is to wonder whether they will be worth learning. Will the promised productivity gains recoup the cost of the time spent learning? My usual assessment is "probably not" (although now and then something worthwhile does come along).
I think that there is a very real chance that the idea of specialized programming languages will indeed disappear in the LLM era, as well as the need for various "ergonomic" features of general purpose languages that exist only to make it possible to express complex things in fewer lines of code. Will any of that be needed if the LLM can just write the code with what it has?
izabera|8 months ago
ackfoobar|8 months ago
Some deep PL stuff I doubt there is productivity gain to begin with. But many ideas in the ML language family are simple and reduce debugging pain. Time lost from one encounter with muddy JS/Python semantics is more than the time learning about sum types.
username223|8 months ago
Maybe some boring, kind-of-consistent language like C, Python, or Go is good enough. An LLM spits out a pile of code in one or more of them that does most of what you want, and you can fix it because it's less opaque than assembly. It doesn't sound like a job I'd want, but maybe that's just the way things will go.
AnimalMuppet|8 months ago
I wonder if we need a language designed to be easier for an AI to reason about, or easier for a human to see the AI's mistakes.
noelwelsh|8 months ago
I also take issue with the idea that Python is simple. Python's semantics are anything but. The biggest issue the language has, performance, is a consequence of these poorly thought out semantics. If the language was actually simple it would be a lot easier to build a faster implementation.
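One concrete illustration of the semantics in question: even attribute lookup on an existing object can change meaning at runtime, because a property added to the class later is a data descriptor and shadows the instance's own attribute, so a compiler can rarely assume anything stays fixed.

```python
class Point:
    def __init__(self, x):
        self.x = x

p = Point(1)
before = p.x   # reads the instance attribute
print(before)  # 1

# Any code, anywhere, may rewrite the class after instances exist:
Point.x = property(lambda self: 42)
after = p.x    # the same expression now hits the data descriptor
print(after)   # 42
```

This is exactly the kind of flexibility that makes `p.x` impossible to compile down to a fixed memory load without heavy speculation.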
cpard|8 months ago
This is so true.
A couple of months ago I was trying to use LLMs to come up with code to parse some semi-structured textual data based on a brief description from the user.
I didn't want to just ask the LLM to extract the information in a structured format, as that would be extremely slow when there's a lot of data to parse.
My idea was: why not ask the LLM to come up with a script that does the job? Kind of "compiling" what the user asks for into a deterministic piece of code that will also be efficient. The LLM just has to figure out the structure and write some code to exploit it.
I also had the bright idea to define a DSL for parsing, instead of asking the LLM to write a python script. A simple DSL for a very specific task should be better than using something like Python in terms of generating correct scripts.
I defined the DSL, created the grammar and an interpreter and I started feeding the grammar definition to the LLM when I was prompting it to do the work I needed.
The result was underwhelming and, at times, hilarious. When I built a loop that fed the model the errors and asked it to correct the script, the model sometimes returned Python scripts, completely ignoring the instructions.
As the author said, everything is easier in Python, especially if you are a large language model!
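A hypothetical miniature version of such a parsing DSL (the commenter's actual grammar isn't shown in the thread, so all instruction names here are invented): each line of the program is one instruction applied to the input.

```python
# A toy line-oriented parsing DSL and its interpreter, in the spirit of
# "compiling" a user's description into a deterministic parsing script.

def run(program, text):
    """Interpret a tiny DSL against semi-structured text."""
    rows = text.splitlines()
    for instr in program.splitlines():
        instr = instr.strip()
        if not instr:
            continue
        op, _, arg = instr.partition(" ")
        if op == "skip":            # skip N header lines
            rows = rows[int(arg):]
        elif op == "split":         # split each line on a delimiter
            rows = [r.split(arg) for r in rows]
        elif op == "take":          # keep only the listed column indices
            idx = [int(i) for i in arg.split()]
            rows = [[r[i] for i in idx] for r in rows]
        else:
            raise ValueError(f"unknown instruction: {op}")
    return rows

data = "name,age,city\nada,36,london\nalan,41,cambridge"
program = "skip 1\nsplit ,\ntake 0 1"
print(run(program, data))  # [['ada', '36'], ['alan', '41']]
```

The irony the commenter describes: even a grammar this small is "out of distribution" for the model, while plain Python is not.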
oleks|8 months ago
chr15m|8 months ago
csomar|8 months ago
What the graph shows is that LLMs struggle with "hard" languages (Rust, Go, C#) with the exception of Ruby.
noobermin|8 months ago
If something is useful people will use it. Just because it seems like llms are everywhere, not everyone cares. I wouldn't want vibe coders to be my target audience anyway.
kccqzy|8 months ago
In the end I think mentioning Python is a red herring. You can produce an eDSL in Python that's not in LLM training data so difficult for LLMs to grok, and yet still perfectly valid Python. The deeper issue here is that even if you use Python, LLMs are restricting people to use a small subset of what Python is even capable of.
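A sketch of what such an eDSL might look like (all names invented for illustration): operator overloading makes this perfectly valid Python, yet the resulting code reads nothing like the Python an LLM has mostly seen.

```python
# A tiny query-expression eDSL embedded in ordinary Python.
# Every line is legal Python, but the "grammar" is the library's own.

class Expr:
    def __and__(self, other):
        return Op("AND", self, other)

class Op(Expr):
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def sql(self):
        return f"({self.left.sql()} {self.op} {self.right.sql()})"

class Lit(Expr):
    def __init__(self, v):
        self.v = v
    def sql(self):
        return repr(self.v)

class Col(Expr):
    def __init__(self, name):
        self.name = name
    def __gt__(self, other):
        return Op(">", self, Lit(other))
    def __eq__(self, other):
        return Op("=", self, Lit(other))
    def sql(self):
        return self.name

q = (Col("age") > 30) & (Col("city") == "london")
print(q.sql())  # ((age > 30) AND (city = 'london'))
```

An LLM that has never seen this library has no more advantage here than with an external DSL, even though every token is "Python".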
ajross|8 months ago
DSLs look great if they let you write the code you already know how to write faster. DSLs look like noise to everyone else, including Gemini and Claude.
I used to be a big DSL booster in my youth. No longer. Once you need to stop what you're doing and figure out your ninth or eleventh oddball syntax, you realize that (as per the article) Everything is Easier in Python.
fud101|8 months ago
quantadev|8 months ago
Exactly right. Now that we're in the era of LLMs and Coding Agents, it's never been more clear that DSLs should be avoided: LLMs cannot reason about them as well as they can about popular languages, and that's just a fact. You don't need to dig any further or weigh pros and cons, imo.
The fewer languages there are in the world (as a general rule), the better off everyone is. We do need a low-level language like C++ to exist and a high-level one like TypeScript, but we don't _need_ multiple of each. The fact that there are already multiple of each is a challenge to be dealt with, not a goal we reached on purpose.
alganet|8 months ago
Not necessarily true. There are two kinds of DSLs: external and internal.
An external DSL has its own tooling, parser, etc. The nix language, for example.
An internal DSL is like a small parasite that lives inside an existing language, reusing some of its syntax and tools. It's almost like intentional pareidolia. Like jQuery, for example.
Internal DSLs reduce the cognitive load, and in my opinion, they're the best kind of DSL.
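A minimal internal DSL in the jQuery style, sketched in Python (names invented for illustration): plain host-language code, but method chaining gives it its own "grammar" inside the language.

```python
# A fluent query chain: an internal DSL that reuses Python's syntax
# and tooling rather than inventing its own parser.

class Query:
    def __init__(self, items):
        self._items = list(items)
    def where(self, pred):
        return Query(x for x in self._items if pred(x))
    def select(self, fn):
        return Query(fn(x) for x in self._items)
    def to_list(self):
        return self._items

result = (Query(range(10))
          .where(lambda n: n % 2 == 0)
          .select(lambda n: n * n)
          .to_list())
print(result)  # [0, 4, 16, 36, 64]
```

Because it piggybacks on the host language, an internal DSL gets editors, debuggers, and (plausibly) LLM familiarity for free.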
loa_in_|8 months ago
keybored|8 months ago
waffletower|8 months ago
prats226|8 months ago
DSLs would be even harder for LLMs to get right in that case than the low-resource language itself.
outofpaper|8 months ago
Mathnerd314|8 months ago
But Python/Julia/Lua are by no means the most natural languages - what is natural is what people write before the LLM, the stuff the LLM translates into Python. It is hard to get a good look at these "raw prompts" since the LLM companies keep those datasets closely guarded, but from HumanEval, MBPP+, YouTube videos of people vibe coding, and such, it is clear that they are mostly English prose with occasional formulas and code snippets thrown in - and not "ugly" text either, but generally pre-processed through an LLM. So from my perspective the next step is to switch from Python as the source language to prompts as the source language: integrating LLMs into the compilation pipeline is a logical step. But currently they are too expensive to use consistently, so this is blocked by hardware economics.
gopiandcode|8 months ago
Maybe designing new languages to be close to pseudo-code might lead to better results in terms of asking LLMs to generate them? But there's also a fear that prose-like syntax might not be the most appropriate for some problem domains.
jayd16|8 months ago
I suppose this could be done now for all the existing languages that target LLVM and unify the training set across languages.
wolpoli|8 months ago
freedomben|8 months ago
munificent|8 months ago
As someone who loves a wide diversity of actively evolving programming languages, it makes me sad to think those days of innovation may be ending. But I hope that's not going to happen.
It has always been the case that anyone designing a new language or adding features to an existing one is acutely mindful of what programming language knowledge is already in the heads of their users. The reason so many languages, say, use `;` for statement terminators is not because that syntax is particularly beautiful. It's just familiar.
At the same time, designers assume that giving users a better way to express something may be worth the cost of asking them to learn and adapt to the new way.
In theory, that should be true of LLMs as well. Yes, a new language feature may be hard to get the LLM to auto-complete. But if human users find that feature makes their code easier to read and maintain, they still want to use it. They will, and eventually it will percolate out into the ecosystem to get picked up the next time the LLMs are trained, in the same way that human users learn new language features by stumbling onto it in code in the wild.
So I'd like to believe that we'll continue to be able to push languages forward even in a world where a large fraction of code is written by machines. I also hope that LLM training cost goes down and frequency goes up, so that the lag behind what's out there in the world and what the LLMs know gets smaller over time.
But it's definitely possible that instead of that, we'll get a feedback loop where human users don't know a language feature even exists because the LLMs never generate code using it, and the LLMs never learn the feature exists because humans aren't writing it.
I have this same fear about, well, basically everything with LLMs: an endless feedback loop where humans get their "information" from LLMs and churn out content which the LLMs train on and the whole world wanders off into a hallucinatory bubble no longer grounded in reality. I don't know how to get people and/or the LLMs to touch grass to avoid that.
I do hope I get to work on making languages great for humans first, and for LLMs second. I'm way more excited to go to work making something that actual living breathing people use than as input data for a giant soulless matrix of floats.
jbellis|8 months ago
mplanchard|8 months ago
dack|8 months ago
LLMs seem pretty good at figuring out these things when given a good feedback loop, and if the DSL truly makes complex programs easier to express, then LLMs could benefit from it too. Fewer lines of code can mean less context to write the program and understand it. But it has to be a good DSL and I wouldn't be surprised if many are just not worth it.
quantadev|8 months ago
nxobject|8 months ago
rramon|8 months ago
emiliobumachar|8 months ago
fragmede|8 months ago
unknown|8 months ago
[deleted]
eru|8 months ago
As AI systems improve, and especially as they add more 'self-play' in training, they might become really good at working in any language you can throw at them.
(To expand on the self-play aspect: when training you might want to create extra training data by randomly creating new 'fake' programming languages and letting it solve problems in them. It's just another way to add more training data.)
In any case, if you use an embedded DSL, like is already commonly done in Haskell, the LLMs should still give you good performance. In some sense, an 'embedded DSL' is just fancy name for a library in a specific style.
msgodel|8 months ago
zzo38computer|8 months ago
Also, domain-specific features can still be useful sometimes, as can other aspects of designing a programming language.
bluehatbrit|8 months ago
My experience so far is that they write mediocre code which is very often correct, and is relatively easy to review and improve. Of course I work with languages like elixir, python, typescript, and SQL - all of which LLMs are very good at.
Without a doubt I've seen a significant increase in the amount of work I can produce. As far as I can tell the defect rate in my work hasn't changed. But the way I work has, I'm now reviewing and refactoring significantly more than before and hand writing a lot less.
To be honest, I'd worry about someone's ability to compete in the job market if they resisted for much longer. With the obvious exceptions of spaces where LLMs can't be used, or have very poor performance.
quantadev|8 months ago
They're riding a horse in the age of automobiles, just because they think they're more comfortable on horseback, even though they've never once been in a car.
tonetheman|8 months ago
[deleted]
diimdeep|8 months ago
kccqzy|8 months ago
averkepasa|8 months ago
arciini|8 months ago
Its primary point is that TIOBE is based on *number* of search results on a weighted list of search engines, not actual usage in Github, search volume, job listings, or any of the other number of signals you'd expect a popularity index to use.
It could easily be indicating that Python articles are being generated by LLMs more than any other class of articles.
qsort|8 months ago
""" Remarkably, SQL has started dropping slowly recently. This month it is at position #12, which is its lowest position in the TIOBE index ever. SQL will remain the backbone and lingua franca of databases for decades to come. However, in the booming field of AI, where data is usually unstructured, NoSQL databases are often a better fit. NoSQL (which uses data interchange formats such as JSON and XML) has become a serious threat for the well-defined but rather static SQL approach. NoSQL's popularity is comparable to the rise of dynamically typed languages such as Python if compared to well-defined statically typed programming languages such as C++ and Java. """
guywithahat|8 months ago
unknown|8 months ago
[deleted]