It's funny to me that people should look at this situation and say "this is OK".
The upshot of all these projects to make JS tools faster is a fractured ecosystem. Who, if given the choice, would honestly want to maintain JavaScript tools written in a mixture of Rust and Go? Already we've seemingly committed to a big schism down the middle. And the new tools don't replace the old ones, so to own your tools you'll need to make Rust, Go, and JS all work together, using a mix of clean modern technology and shims into horrible legacy technology. We have to maintain everything, old and new, because it's all still critical; engineers have to learn everything, old and new, because it's all still critical.
All I really see is an explosion of complexity.
Each of these tools provides real value.
* Bundlers drastically improve runtime performance, but it's tricky to figure out what to bundle where and how.
* Linting tools and type-safety checkers detect bugs before they happen, but they can be arbitrarily complex, and they benefit from type annotations. (TypeScript won the type-annotation war in the marketplace against competing systems, including Meta's Flow and Google's Closure Compiler.)
* Code formatters automatically ensure consistent formatting.
* Package installers are really important, and package installation is a hugely complex problem in a performance-sensitive and security-sensitive area. (Managing dependency conflicts/diamonds, caching, platform-specific builds…)
As long as developers benefit from using bundlers, linters, type checkers, code formatters, and package installers, and as long as it's possible to make these tools faster and/or better, someone's going to try.
And here you are, incredulous that anyone thinks this is OK…? Because we should just … not use these tools? Not make them faster? Not improve their DX? Standardize on one and then staunchly refuse to improve it…?
> We have to maintain everything, old and new, because it's all still critical; engineers have to learn everything, old and new, because it's all still critical.
I completely agree, but maintenance is a maintainer problem, not a problem for the consumer or user of the package, at least according to the average user of open source nowadays. One of two things will come out of this: either the wheels start falling off once the community can no longer maintain this fractured tooling, as you point out, or companies are going to pick up the slack and start stewarding it (likely looking for opportunities to capture tooling and profit along the way).
Neither outcome looks particularly appealing.
I look at it and don't really have an issue with it. I have been using tsc, vite, eslint, and prettier for years. I am in the process of switching my projects to tsgo (which will soon be tsc anyway), oxlint, and oxfmt. It's not a big deal and it's well worth the 10x speed increase. It would be nice if there was one toolchain to rule them all, but that is just not the world we live in.
The good part is that the new tools do replace the old ones, while being compatible. The pattern is:
* Rolldown is compatible with Rollup's API and can use most Rollup plugins
* Oxlint supports JS plugins and is ESLint-compatible (it can run ESLint rules easily)
* Oxfmt plans to support Prettier plugins, in turn using the power of the ecosystem
* and so on...
So you get better performance and can still work with your favorite plugins and extend tools "as before".
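For example, reusing an existing Rollup plugin under Rolldown is typically just a config swap. Here's a rough sketch, assuming Rolldown's Rollup-style config shape; the plugin choice and paths are purely illustrative:

```ts
// rolldown.config.ts — sketch of a Rollup-style config consumed by Rolldown.
// @rollup/plugin-json is an ordinary Rollup plugin, reused unchanged.
import { defineConfig } from 'rolldown'
import json from '@rollup/plugin-json'

export default defineConfig({
  input: 'src/main.ts',
  output: {
    dir: 'dist',
    format: 'esm',
  },
  plugins: [json()],
})
```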
Regarding the "mix of technology" or tooling fatigue: I get that. We have to install a lot of tools, even for a simple application. This is where Vite+[0] will shine, bringing the modern and powerful tools together, making them even easier to adopt and reducing the divide in the ecosystem.
[0] https://viteplus.dev/
It's definitely an explosion of complexity but also something that AI can help manage. So :shrug: ...
I thought this was the point of all development in the JavaScript/web ecosystem?
Based on current trends, I don't think people care about knowing how all the parts work (even before these powerful LLMs came along) as long as the job gets done and things get shipped and it mostly works.
I'm very surprised the article doesn't mention Bun. Bun is significantly faster than Vite & Rolldown, if it's simply speed one is aiming for. More importantly, Bun allows for simplicity: install Bun and you get a bundler included, TypeScript just works, and it's blazing fast.
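To illustrate the "everything included" point: Bun ships a bundler behind a single built-in API, so a build script can be this small (a sketch with placeholder paths):

```ts
// build.ts — run with `bun build.ts`; uses Bun's built-in bundler API.
// The entrypoint and outdir here are placeholder paths.
const result = await Bun.build({
  entrypoints: ['./src/index.ts'],
  outdir: './dist',
  minify: true,
})

if (!result.success) {
  // Surface bundler diagnostics and fail the script.
  for (const log of result.logs) console.error(log)
  process.exit(1)
}
```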
IMO Bun and Vite are best suited for slightly different things. Not to say that there isn't a lot of overlap, but if you don't need many of the features Bun provides, it can be a bit overkill.
Personally, I write a lot of Vue, so using a "first party" environment has a lot of advantages for me. Perhaps if you are a React developer, the swap might be even more straightforward.
I also think it's important to take into consideration the other two packages mentioned in this post (oxlint & oxfmt) because they are first class citizens in Vite (and soon to be Vite+). Bun might be a _technically_ faster dev server, but if your other tools are still slow, that might be a moot point.
Also, TypeScript "just works" in Vite as well. I have a project at work that uses `.ts` files without even a `tsconfig` file in the project.
https://vite.dev/guide/features#typescript
It's been a while since I've tried it, but post-1.0 release of Bun still seemed like beta software and I would get all sorts of hard to understand errors while building a simple CRUD app. My impression from the project is the maintainers were adding so many features that they were spread too thin. Hopefully it's a little more stable now.
Bun and Vite are not really analogous. Bun includes features that overlap with Vite but Vite does a lot more. (It goes without saying that Bun also does things Vite doesn't do because Bun is a whole JS runtime.)
This smells of "I like to solve puzzles and fiddle with things" and reminds me of hours spent satisfyingly tweaking my very specific, custom setups for various technical things.
I, too, like to fiddle with optimizations and tool configuration puzzles but I need to get things done and get them done now. It doesn't seem fast, it seems cumbersome and inconsistent.
> It doesn't seem fast, it seems cumbersome and inconsistent
I think the point of this project is to provide an opinionated set of templates aimed at shipping instead of tinkering, right? "Don't tinker with the backend frameworks, just use this and focus on building the business logic."
The bit about strict guardrails helping LLMs write better code matches what we have been seeing. We ran the same task in loose vs strict lint configurations and the output quality difference was noticeable.
What was surprising is that it wasn't just about catching errors after generation. The model seemed to anticipate the constraints and generated cleaner code from the start. My working theory is that strict, typed configs give the model a cleaner context to reason from, almost like telling it what good code looks like before it starts.
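To make "strict, typed configs" concrete, here's a sketch of the kind of setup meant, using typescript-eslint's flat-config presets; the specific rules are illustrative, not the exact config we ran:

```js
// eslint.config.mjs — sketch of a strict, type-aware lint setup.
import tseslint from 'typescript-eslint'

export default tseslint.config(
  // Type-checked "strict" preset from typescript-eslint.
  ...tseslint.configs.strictTypeChecked,
  {
    languageOptions: {
      // Let the parser discover the nearest tsconfig automatically.
      parserOptions: { projectService: true },
    },
    rules: {
      '@typescript-eslint/no-explicit-any': 'error',
      '@typescript-eslint/explicit-function-return-type': 'error',
    },
  },
)
```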
The piece I still haven't solved: even with perfect guardrails per file, models frequently lose track of cross-file invariants. You can have every individual component lint-clean and still end up with a codebase that silently breaks when components interact. That seems like the next layer of the problem.
We've been building our frontend with AI assistance and the bottleneck has shifted from writing code to reviewing it. Faster tooling helps, but I wonder if the next big gain is in tighter feedback loops — seeing your changes live as the AI generates them, rather than waiting for a full build cycle.
Kinda crazy that ts-node is still the recommendation when it hasn't been updated since 2023. And likewise crazy that no other lib has emerged that does TypeScript compilation and typechecking. Of course, if it works, don't fix it, but TypeScript has evolved quite a bit since 2023.
Love the fact that you don't need anything ts-node/tsx-like if you have erasable syntax only. Other than that, there is https://github.com/oxc-project/oxc-node too.
Any method for front end tooling is potentially the fastest. It always comes down to what you measure and how you measure it. If you don't have any measures at all then your favorite method is always the fastest no matter what, because you live in a world without evidence.
Even after measurements are considered, radical performance improvements are more typically the result of the code's organization and the techniques employed than of the language it's written in. But, of course, that cannot be validated without evidence from comparing measurements.
The tragic part of all this is that everybody already knows this, but most front end developers do not measure things and may become hostile when measurements do occur that contradict their favorite techniques.
I have yet to meet a front-end dev that gets hostile when you show them how their code can be improved. On the contrary, the folks I have worked with are thrilled to improve their craft.
Unless of course you are not showing them improvements and are instead just shitting on their work. Yes, people do get hostile to that approach.
Any plans to create a combined server + web app template using @hono/vite-dev-server for local development, with both sides of auth preconfigured, with the server serving up the built web app in production?
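For context, the local-dev half of that setup is typically just a Vite plugin. A sketch, assuming @hono/vite-dev-server's documented plugin usage (the entry path is a placeholder):

```ts
// vite.config.ts — sketch of the combined dev setup described above.
import { defineConfig } from 'vite'
import devServer from '@hono/vite-dev-server'

export default defineConfig({
  plugins: [
    devServer({
      // Entry file exporting the Hono app (placeholder path).
      entry: 'src/server.ts',
    }),
  ],
})
```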
I've used this setup for my last few projects and it's so painless, and with recent versions of Node.js which can strip TypeScript types I don't even need a build step for the server code.
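Concretely, the no-build-step server can be as simple as this minimal sketch, assuming a Node.js version with type stripping enabled by default (v23.6+) and sticking to erasable syntax only:

```ts
// server.ts — run directly with `node server.ts`; the annotations
// below are erasable syntax, so Node strips them without a build step.
import { createServer } from 'node:http'

interface Health {
  status: string
}

const payload: Health = { status: 'ok' }

createServer((_req, res) => {
  res.setHeader('content-type', 'application/json')
  res.end(JSON.stringify(payload))
}).listen(3000, () => {
  console.log('listening on http://localhost:3000')
})
```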
Edit: oops, I didn't see nkzw-tech/fate-template, which has something like this, but running client and server separately instead
All y'all need more RAM in your development laptops. Maybe. At least, I've never been bothered by the performance of standard tooling like prettier, ESLint, and npm.
Can't wait for the first crypto-attack on a front-end JS library that's caused by a Go package vuln. God knows how `pnpm audit` will handle Go-module dependencies.
(I opened an issue against typescript-go to flag this: https://github.com/microsoft/typescript-go/issues/2825)
I'm confused by this, but also curious what we mean by "fastest".
In my experience, the bottleneck has always been backend dev and testing.
I was hoping "tooling" meant faster testing, not yet another layer of frontend dev. Frontend dev has been pretty fast, even when done completely by hand, for the last decade or so. I have livecoded, and have seen others livecode, on 15-minute calls with stakeholders or QA to mock some UI or debug, and I've seen people deliver the final results from that meeting just a couple of hours later. And I mean that's what's going to prod, minus some very specific edge-case bugs that might even get argued away and never fixed.
Not trying to be defensive of pure human coding skills, but sometimes I wonder if we've rolled back expectations in the past few years. All this recent stuff seems even more complicated and more error prone, and frontend is already those things.
The ecosystem fragmentation thing hit me pretty hard when I was trying to set up a consistent linting workflow across a monorepo last year. Half the team was already using Biome, half still on ESLint + Prettier, and adding any shared tooling meant either duplicating config or just picking a side and upsetting someone.
I get why the Rust/Go tools exist: the perf gains are measurable. But the cognitive overhead is real. A new engineer joins, and they now need three different mental models just to make a PR. Not sure AI helps here either, honestly; it just makes it easier to copy-paste configs you don't fully understand.
They said at the time that Go let them keep the overall structure of the code, that is, they weren't trying to do a re-implementation from scratch, more of a port, and so the port was more straightforward with Go.