Some of this wording confuses me and should probably be reworked:
> Snowpack is a O(1) build system… Every file goes through a linear input -> build -> output build pipeline
Seems like O(n) to me?
> Snowpack starts up in less than 50ms. That’s no typo: 50 milliseconds or less. On your very first page load, Snowpack builds your first requested files and then caches them for future use
So you can open a socket in 50ms? Seems disingenuous to imply anything takes 50ms when really you’re just waiting until the first request to do anything.
Looks like an interesting project though.
I watched the linked talk by Ives, the guy behind CodeSandbox.
So the exact quote in that talk, is:
"... and if you take the existing module map (file), it essentially replaces (function) #1 with this (new) one. And this is how bundlers like Metro work, for example -- They regenerate the file, and over a websocket connection, they send the new file and say "replace function #1 with this one, and execute it."
He then touches on how bundlers do a lot more than just transformation in this step, other processes like chunking, tree-shaking, code-splitting, etc. And this makes it incredibly difficult to properly cache a file, because you can't really get a good heuristic for diffing/patching.
And then the quote of the hour:
"But what if bundling had an O complexity of 1? That would mean that if one file changes, we only transform THAT file and send it to the browser." (no others in dependency tree)
Above quotes start here, and go for about two minutes: https://youtu.be/Yu9zcJJ4Uz0?t=1018
In this context, what I believe Ives is aiming for is the idea that's pervasive throughout the talk, which is this "50ms, or sub-second, HMR time."
If you overlook slight variances between time to update a file which contains more/less content with this system, and you average it out to "50ms, every file, any file", I think that would qualify as O(1) bundling/hot-reloading right?
Maybe I'm misunderstanding, but it seems clear to me: Other bundlers = change one file, `n` files are rebuilt/bundled. Snowpack = change one file, only that one file is rebuilt. Building "from scratch" will necessarily be O(n), but incremental rebuilds can be O(1), no?
>> Snowpack is a O(1) build system… Every file goes through a linear input -> build -> output build pipeline
> Seems like O(n) to me?
Big O notation describes how for a given operation (sorting a list), a given measurement (e.g. items compared, memory used, CPU cycles) grows in relation to a set of variables describing the input to the operation (number of elements).
Here their operation is 'a single-file incremental compilation', they're measuring how many files need recompiling in total, and the only variable n is 'the number of files in the project'.
In that case, Snowpack is O(1), and most other bundlers are O(n). They're not wrong.
You can argue that that's not an interesting point, or that there's other measurements that matter more ofc. It's not wrong though, and imo incremental single-file changes are a thing you care about here, and only ever processing one file in doing so is an interesting distinction between Snowpack and other bundlers.
Big O notation is not (ever) defining any detail of the operation in terms of every possible conflating variable. All Big O comparisons include implicit definitions: the exact same sorting algorithm could be described as both O(n log n) for item comparisons required per n items in the list, or O(kn^2) for memory copies required per k bytes of largest item & n items, and both are accurate & useful descriptions.
(Sorry to be pedantic, but there's a _lot_ of replies here that have misunderstood big O entirely)
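The two framings above can be made concrete with a toy model (this is neither Snowpack's nor any bundler's actual code, and the dependency graph is invented): count how many files must be rebuilt when one file changes.

```javascript
// Toy model: deps maps each file to the files it imports.
const deps = {
  'index.js': ['a.js', 'b.js'],
  'a.js': ['util.js'],
  'b.js': ['util.js'],
  'util.js': [],
};

// Bundler-style: a change invalidates every file that transitively
// depends on it, so the rebuild set grows with the project -- O(n).
function bundlerRebuilds(changed) {
  const dependents = (file) =>
    Object.keys(deps).filter((f) => deps[f].includes(file));
  const dirty = new Set([changed]);
  const queue = [changed];
  while (queue.length) {
    for (const d of dependents(queue.pop())) {
      if (!dirty.has(d)) { dirty.add(d); queue.push(d); }
    }
  }
  return dirty.size;
}

// Snowpack-style: every file builds independently, so a change rebuilds
// exactly one file regardless of project size -- O(1) in file count.
function unbundledRebuilds(changed) {
  return 1;
}

console.log(bundlerRebuilds('util.js'));   // -> 4 (util, a, b, index)
console.log(unbundledRebuilds('util.js')); // -> 1
```

With "n = number of files in the project" as the only variable, the first function is O(n) and the second is O(1); with "n = lines in the changed file" both are O(n), which is exactly the disagreement in this thread.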
The JS community doesn't give a fuck about using correct technical terms. It's like listening to a 5yo child trying to use adult words, you have to know to interpret them differently.
"Isomorphic Javascript" ~ Same code running in browser and server, has nothing to do with isomorphism in the mathematical sense.
"O(1) build system" ~ It's faster than others, has nothing to do with big-O notation.
I just have no idea why someone would think the killer feature of a new new new new new build system for web dev is micro optimising for speed.
The issue with build systems is complexity, poor defaults, and poor config systems that often have you reading across the docs for the multiple build frameworks they mashed together, trying to figure out how to merge config overrides and breaking changes between versions.
Literally the last thing I care about is whether my browser has refreshed fractionally quicker as I switch over from my IDE.
I believe they are saying O(1) since it only builds the single file you just changed and will never rebuild all your files. They also mention that it only builds files as the browser requests them, so in terms of the compiler it would be O(1); however, it effectively becomes O(n) the first time you use it, if you count all the individual requests the browser makes as a single unit.
Yep, it's very much O(n). Even if you only consider incremental compilation it's O(n), where n is the number of lines in the file being compiled.
That being said, I understand the feeling of it being sort of O(1) from an incremental compilation perspective. For most of the JS build ecosystem, changing a file can result in many, many files getting recompiled due to bundling. So if you ignore that the size of the file isn't constant, it seems kind-of-O(1) for incremental compilation: for any file change in your codebase, only a single file gets recompiled, regardless of the size or dependency structure of the codebase. And as a result it should be much faster than the rest of the JS ecosystem for incremental compilation, since typically individual files don't get to be that large, and other incremental build systems may have to compile many files for a line change.
But yeah, from a CompSci perspective, it's O(n), even for incremental builds: as the number of lines of the file grows, the amount of work grows. And for non-incremental builds it's of course O(n).
> So you can open a socket in 50ms? Seems disingenuous to imply anything takes 50ms when really you’re just waiting until the first request to do anything.
This makes a lot more sense in the context of the rest of the JS ecosystem. Of course, what Snowpack is doing is opening a socket, and opening a socket in 50ms isn't particularly impressive (mostly it's just measuring the overhead of starting a Node process and importing various dependencies). But other JS ecosystem build tools are very slow to start, because they're architected differently than Snowpack: they do full builds of the entire codebase (due to bundling) — or at least typically builds of large swaths of the codebase — and so on startup typically they'll immediately start building because doing it just-in-time is slow, which makes them slow to start. And if they don't start building immediately, the first request they service is typically quite slow. Since Snowpack doesn't bundle files, it's able to only build the files a specific page uses, which is typically much faster than building the entire codebase; as a result, they can do on-demand builds when a specific page is requested instead of relying on precompilation.
The 50ms isn't impressive in terms of "look how fast we opened a socket." It's impressive in terms of "look how quickly you can start loading pages to see results as compared to other systems," because their build system is so fast that they don't need to precompile.
I started using Snowpack just last week, but I'm not even using the dev server or the bundler part. All I really needed was its ability to convert npm packages into single-file ES modules. Once everything is an ES module you can just let the browser load them all, no bundler or dev server needed at all in your dev cycle. The only dev-time conversion needed is the compilation from typescript to JS, which my IDE already does instantly whenever I save. Previously this worked fine for all our own code but not for dependencies, so I'm pretty happy Snowpack was able to solve that problem.
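The npm-to-ESM conversion described above hinges on one detail: browsers can't resolve bare specifiers like `import ... from "preact"`, so the tool rewrites them to concrete URLs after converting each package into a single ESM file. A minimal sketch of that rewrite (the function name and regex are mine, and `/web_modules` is just the path convention early Snowpack used; the real implementation uses a proper parser, not a regex):

```javascript
// Rewrite bare import specifiers to URLs the browser can fetch.
// Relative imports ('./x.js', '../x.js') are left untouched.
function rewriteBareImports(source, webModulesDir = '/web_modules') {
  return source.replace(
    /from\s+['"]([^'"./][^'"]*)['"]/g,
    (match, specifier) => `from '${webModulesDir}/${specifier}.js'`
  );
}

const input = `import { h, render } from 'preact';\nimport { local } from './local.js';`;
console.log(rewriteBareImports(input));
// 'preact' becomes '/web_modules/preact.js';
// the relative './local.js' import is unchanged.
```

Once every specifier resolves to a real file, `<script type="module">` and the browser's own loader replace the dev-time bundler entirely.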
How does it work with shared dependencies? Does each direct npm dependency get bundled with its own copy of each shared dependency, or do those get re-wired to point to a shared module?
Okay, this is really cool but I don't want to "create a snowpack app". I just want a "If you're using webpack + babel and want more speed, do this" thing. With the webpack dev server builds aren't too bad for the size of thing I'm working on.
That’s basically what the rest of the docs are for. I’ve been playing with it recently, and there is a learning curve but probably less than learning webpack from scratch.
I also found it useful to look through the code in create snowpack app, it’s not very dynamic or complex, the config files are written in a simple way and they get copied over or extended by the app that the tool creates for you.
I find this interesting. As a mainly desktop developer now doing web frontend work, the JS ecosystem has been so frustrating.
Bundlers struck me as unnecessary given JS now has native module support, and that is the premise of this project.
Some out-of-memory issues when bundling certain dependencies, and slow "npm start" times with React, have only strengthened my initial impressions. So again, this could be a welcome improvement.
Shameless plug for those of you who prefer video tutorials to written ones: https://youtu.be/nbwt3A9RzNw It's an intro to Snowpack v1, but it'll still give you a good idea of what Snowpack does and how it differs from Webpack. I would agree that Snowpack isn't quite there for production projects, mostly due to the fact that many projects still don't ship their modules as ES modules.
It seems like the author is implying there is a flat constant time for compilation which can't be true because it's dependent on the number of changed files.
Sounds interesting. It's a bit unclear for me what the "runs in 15ms" means. I think in my projects, the TypeScript compilation is what takes the longest, so although I use parcel and it's pretty fast, I still have to wait 1-2 seconds for TypeScript to compile changes. If it does not bundle, and still uses all the external transformers (TypeScript, Babel, etc.), what exactly does it do? Does it somehow optimize the execution of those transformers/transpilers?
> I think in my projects, the TypeScript compilation is what takes the longest, so although I use parcel and it's pretty fast, I still have to wait 1-2 seconds for TypeScript to compile changes.
The build result doesn’t need to wait on the results of the type checking. TypeScript or Babel transpiling can happen even if there is a type error.
> If it does not bundle, and still uses all the external transformers (TypeScript, Babel, etc.), what exactly does it do? Does it somehow optimize the execution of those transformers/transpilers?
It skips the bundling step, and does aggressive caching.
Having the browser make one request per npm bundle sounds awful. It’s great if client has fast internet and server is close by, or mostly localhost, but latency will play a far bigger role than the 50ms startup time. That’s not a good metric to look at.
The metric that corresponds to user experience is cold compile + page reload time, and incremental compile + page reload time, i.e. how long before I press enter on a command and see something usable in a browser to dev-loop on.
If you let the browser load the first file, parse and figure out the next file to load, a large project could have 100s of roundtrips. That’s why JS bundlers were created in first place. To avoid the cost of a long critical chain.
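The long-critical-chain point can be made concrete: with unbundled native ESM, the browser discovers a file's imports only after downloading it, so the number of *sequential* roundtrips equals the depth of the import graph. A toy sketch with an invented graph (`<link rel="modulepreload">` mitigates this, but only for imports you list ahead of time):

```javascript
// imports maps each module to the modules it imports.
const imports = {
  'index.js': ['app.js'],
  'app.js': ['router.js', 'store.js'],
  'router.js': ['util.js'],
  'store.js': [],
  'util.js': [],
};

// Sequential roundtrips = longest chain from the entry to a leaf,
// since siblings can be fetched in parallel.
function sequentialRoundtrips(entry) {
  const children = imports[entry] || [];
  return 1 + Math.max(0, ...children.map(sequentialRoundtrips));
}

console.log(sequentialRoundtrips('index.js')); // -> 4
```

A bundler collapses this to one request; at a 100ms RTT the gap between four sequential requests and one is already noticeable, which is the tradeoff this comment is pointing at.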
Using a device from Africa (Uganda) to connect to US servers, one feels how bad an experience latency can make. More and more development is done on cloud machines or remote hosts, so this isn’t a rare use case.
What I do hope for is if there is a new bundler, it can use the webpack plugin ecosystem. It’s massive and anything new has to foster a similar ecosystem of tooling.
Or please just make webpack fast with incremental disk compiles. I would pay money for that.
I just tried it in a @microsoft/rush project of mine.
Added a new project with 1 dependency (which contains a single one-liner function to return a test string). No other dependencies.
Takes about 30s to start. Not sure whether the fact that my dependency is a link with many siblings due to rush and pnpm is an issue, but it is a far cry from 50ms.
Also, I did not get it to reliably pick up when the dependency has changed (cache invalidation most likely has a strategy incompatible with `npm link`/`pnpm`).
Snowpack in principle looks nice, but I think I need something else.
Is anyone using this in combination with a plain ole' server rendered app? All the examples seem to build on a SPA example where you have a single index.js entrypoint for your entire app. What about a Rails/Django project where each page loads a few scripts it needs?
That use case has been stuck with the "global jQuery plugins" approach for ages and it feels like <script type="module"> + something like Snowpack would really improve it.
Is it just me, or is the build time pretty much never an issue? Usually when I develop stuff builds/recompiles faster than I can switch to my browser to try it out.
How is this such a big problem for people that they need to write yet another build tool, instead of improving the one everyone already uses?
I promise you this is a very real problem, every company I’ve worked at with even a moderate sized codebase has had to battle webpack at various points and try to hack in various types of only semi functional 3rd party caching tools and such to make development more manageable.
If you’re a solo dev working on mostly new codebases I imagine it’s not a problem for you though.
I guess I'm different to most JS developers, because I prefer to work with HMR off about 95% of the time. It's good for UI prototyping (which I don't do much, tbf), but it tends to get in my way when doing anything else. Maybe in total it makes me lose a minute or two, but that's not an issue.
This is interesting, what's the upside of working without HMR? There are changes where HMR fails to figure things out and you have to hard reload, but other than those it has served me very well otherwise. Interested in hearing the other side of the story, if there is one.
In my experience using webpack, once you’ve configured incremental builds, the only slow part is TypeScript type checking. That’s solved by doing it async and having the dev build be compile only. Even a huge project builds after a single file change faster than you can notice.
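For reference, one common way to get the setup this comment describes (the package names are real, but treat this as an illustrative fragment; options and versions vary by project): have `ts-loader` only transpile, and run the type checker in a separate process with `fork-ts-checker-webpack-plugin` so builds never wait on it.

```javascript
// webpack.config.js (fragment) -- transpile-only builds with async type checking
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = {
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'ts-loader',
        options: { transpileOnly: true }, // skip type checking in the build path
      },
    ],
  },
  plugins: [
    new ForkTsCheckerWebpackPlugin(), // type errors reported asynchronously
  ],
};
```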
> Some bundlers may even have O(n^2) complexity: as your project grows, your dev environment gets exponentially slower
That's clearly not exponential!
O(n) effectively approximates O(1) with sufficiently low values of n.
I wasn't aware of this; that's actually a pretty cool feature and incredibly useful.
A bit unlearned on ESM modules: how are they different from the isomorphic browser/Node single-file bundles produced by Webpack/Rollup?
It's basically a tool that allows you to develop without bundling, but it still bundles for production via Parcel.
So it's not a Webpack/Parcel/Rollup killer.
But yeah, JS is Crazy Town. It can be very frustrating.
(Be wary of dependencies.)
> Some bundlers may even have O(n^2) complexity: as your project grows, your dev environment gets exponentially slower
They seem to not understand the difference between exponential and quadratic either. This is appalling.
Development: creates many ESM files, which Firefox/Chrome can load.
Production: bundles & minimizes these ESM files.
One question: there is a JS error occurring only in IE11, "t._x is undefined". How do I debug that?
I do wonder, though, if it would be enough to turn on Cloudflare's minification for prod.