Micro-libraries are really good, actually: they're highly modular, self-contained code, which often makes it really easy to understand what's going on.
Another advantage is that because they're so minimal and self-contained, they're often "complete": they've achieved what they set out to do. So there's no need to continually patch them for security updates, or at least you need to do it less often, and you're less likely to be dealing with breaking changes.
The UNIX philosophy is also built on the idea of small programs that, just like micro-libraries, do one thing and do it well, and compose into larger things.
I would argue the problem is how dependencies in general are added to projects, which the blog author pointed out with left-pad. Copy-paste works, but I would argue the best way is to fork the libraries and add them as submodules to your project. Then if you want to pull a new version of the library, you can update the fork and review the changes. It's an explicit approach that can prevent a lot of pitfalls: malicious actors, breaking changes leading to bugs, etc.
Micro-libraries anywhere else are everything you said: building blocks that come after a little study of the language and its stdlib and will speed up development of non-trivial programs.
In JS and NPM they are a plague, because they promise to be a substitute for competence in basic programming theory, for competence in JS itself, for the gaps and bad APIs inside JS, and for de-facto standards of the programming community like the oldest functions in libc.
There are a lot of ways to pad a number in JS, and a decent dev would keep their own utility library, or hell, a single function to copy-paste for that. But no: npm users are taught to fire and forget, and to update everything, with no concept of vendoring (which would have made incidents like left-pad, faker and colors much less maddening; vendoring is even built into npm, and it's very good!). For years they have, in effect, been copy-pasting into the wrong window: they should be copy-pasting blocks of code, not npm commands. And God help you if you mistype your npm commands, because bad actors have bought into the trend and made millions of packages with a hundred different scams waiting for fat fingers.
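For what it's worth, the "own utility function" version is tiny these days; a sketch built on the standard `String.prototype.padStart` (ES2017), so nothing here is library-specific:

```javascript
// Left-pad without a dependency: wrap the built-in padStart (ES2017+).
function leftPad(value, length, padChar = " ") {
  return String(value).padStart(length, padChar);
}

leftPad(7, 3, "0"); // "007"
leftPad("abc", 5);  // "  abc"
```

Before ES2017 the same thing was a two-line loop, which is exactly the kind of snippet that belongs in a personal utility file.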
Given that backend JS optimizes for reducing cost whatever the price, becoming a Smalltalk for the browser and for PHP devs, you would expect some kind of standard to emerge: a single way to do routine stuff. Instead, in JS-world you get TypeScript, and in the future maybe WASM. JS is just doomed. Or rather, we are doomed if JS isn't, to be honest.
The UNIX philosophy is being abused a bit in this argument. Most systems that fall under the UNIX umbrella are more or less a large batteries-included standard library: lots of little composable units that ship together. UNIX in practice is not about taking a bare system, randomly downloading tools like tee and cat and head from a bunch of disjointed places, gluing them together, and perpetually keeping them updated independently.
> So there's no need to continually patch it for security updates, or at least you need to do it less often, and it's less likely that you'll be dealing with breaking changes.
Regardless of how supposedly good or small the library is, the frequency at which you need to check for updates is the same. It doesn't have anything to do with the perceived or original quality of the code. Every 3rd-party library at least depends on the platform, and platforms are big: they have vulnerabilities and introduce breaking changes. Then there's the question of trust and the consistency of your delivery process. You won't adapt your routines to the specifics of every tiny piece of 3rd-party code, so you'll probably check for updates regularly, and for everything at once. At that point their size is no longer an advantage.
> Copy-paste works, but I would argue the best way is to fork the libraries and add submodules to your project. Then if you want to pull a new version of the library, you can update the fork and review the changes.
This sounds "theoretical" and is not going to work at scale. You cannot seriously expect application-level developers to understand the low-level details of every dependency they want to use. For a meaningful review of merged changes they would have to be domain experts; otherwise the effectiveness of the approach will be very low, and they will inevitably end up trusting the authors and merging without going into details.
If these libraries are so small, self-contained and "completed", why not just copy-paste these functions?
Submodules can work too, but do you really need the extra lines in your build scripts, the extra files and directories, and the import lines, all for a five-line function? Copy-pasting is much simpler, with maybe a comment referring to the original source.
Note: there may be some legal reasons for keeping "micro-libraries" separate, or for not using them at all, though IANAL, as they say.
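As a concrete sketch of what that copy-paste looks like in practice (the function and the attribution comment here are illustrative, not from any particular library):

```javascript
// Vendored utility: clamp a number to a range.
// Adapted from an open-source snippet; keep the original project name,
// URL and license in a comment like this one for attribution.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

clamp(15, 0, 10); // 10
clamp(-3, 0, 10); // 0
```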
> The UNIX philosophy is also built on the idea of small programs, just like micro-libraries, of doing one thing and one thing well, and composing those things to make larger things.
The Unix philosophy is also built on a willful neglect of systems thinking. The complexity of a system isn't in the complexity of its parts but in the complexity of the interactions between its parts.
Putting ten micro-libraries together, even if each one is simple, doesn't mean you have a simple program. In fact, it doesn't even mean you have a working program, because that depends entirely on how your libraries play together. When you implement the content of micro-libraries yourself, you have to be conscious not just of what your code does but of how it works, and that's a good first defense against putting together parts that don't fit.
> The UNIX philosophy is also built on the idea of small programs, just like micro-libraries, of doing one thing and one thing well, and composing those things to make larger things.
They have small programs, but those are not separate projects. For example, all the basic Linux utilities are developed and distributed together as part of the GNU coreutils package.
It's the same as having a modular library with multiple functions in it that you can choose from. In fact, the problem is that functions like isNumber shouldn't even be libraries; they should be in the language's standard library itself.
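For reference, the standard library has since caught up for this particular case; these are standard ECMAScript built-ins (ES2015+), which is arguably where such checks belong:

```javascript
// Built-in number checks, no library required:
Number.isFinite(42);       // true:  a real, finite number
Number.isFinite("42");     // false: no string coercion, unlike global isFinite
Number.isNaN(NaN);         // true:  no coercion, unlike global isNaN
typeof "42" === "number";  // false: a numeric string is still a string
```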
> I would argue the problem is how dependencies in general are added to projects
But you need the functionality anyway, so there are two possible dependencies: on your own code, or on someone else's. You can't avoid the dependency, and it comes at a cost either way.
If you don't know how to code the functionality, or it would take too much time, a library is a natural outcome. But if you need leftPad or isNumber as an external dependency, that's so far in the other direction that it's practically a sign of incompetence.
> The UNIX philosophy is also built on the idea of small programs, just like micro-libraries, of doing one thing and one thing well, and composing those things to make larger things.
This year I started learning FORTH, and it's very much this philosophy. To build a building, you don't start with a three-story slab of marble. You start with hundreds of perfect little bricks, and fit them together.
If you come from a technical ecosystem outside the Unix paradigm, it can be hard to grasp.
Right! So if it is indeed so easy to understand what is going on, why would you need to make it an external dependency that can update itself behind your back?
If you understand what is going on, paste it into your tree.
> Micro-libraries are really good actually, they're highly modular, self-contained code
Well, I think that is the point: they're not self-contained. You are adding mystery stuff, and who knows how deep the chain of dependencies goes. See the left-pad fiasco that broke so much stuff, because the chain of transitive dependencies ran deep and wide.
NPM is a dumpster fire in this regard. I try to avoid it - is there a flag you can set to say "no downstream dependencies" or something when you add a dependency? At least that way you can be sure things really are self-contained.
Do you know what else is all of that? Writing the five lines of code by hand. Or just letting an LLM generate it. This and everything else I want to reply has already been covered in the article.
Micro libraries are ok - TFA even says you can use self-contained blocks as direct source.
Micro-dependencies are a goddamn nuisance, especially with all the transitive micro-dependencies that come along, often with different versions, alternative implementations, etc.
Basically, enforce that all libraries have lock files and when you install a dependency use the exact versions it shipped with.
Edit: Can someone clarify why this doesn't work? Wouldn't it make installing node packages work the same way as it does in python, ruby, and other languages?
Why even stop at micro-libraries? Instead of "return num - num === 0" why not create the concept of pico-libraries people can use like "return isNumberSubtractedFromItselfZero(num)" ? It's basically plain English right?
You could say that if all the popular web frameworks in use today were rewritten to import and use hundreds of thousands of pico-libraries, their codebases would be, as you say, composed of many highly modular, self-contained pieces that are easy to understand.
The primary cause of the left-pad incident was that left-pad was removed from the npm registry. Many libraries depended on left-pad. The same could have occurred with any popular library, whether micro or not.
To reformulate the statement made in the intro of this post: "maybe it’s not a great idea to outsource _any critical_ functionality to random people on the internet."
It has long been a standard, best practice in software engineering to ensure dependencies are stored in and made available from first-party sources. For example, this could mean maintaining an internal registry mirror that permanently stores any dependencies that are fetched. It could also be done by vendoring dependencies. The main point is to take proactive steps to ensure your dependencies will always be there when you need them, and to not blindly trust a third-party to always be there to give your dependencies to you.
> To reformulate the statement made in the intro of this post: "maybe it’s not a great idea to outsource _any critical_ functionality to random people on the internet."
Well everything is critical in the sense that a syntax error could break many builds and CI systems.
This is what lock files are for. If used properly, and the registry is available, there are no massive issues. This is how things are supposed to work; all the tooling is made this way.
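For context, a lock file pins the exact resolved version plus an integrity hash for every package in the tree, so `npm ci` reproduces the same install as long as the registry still serves the tarball. A trimmed, illustrative `package-lock.json` entry (hash elided):

```json
"node_modules/left-pad": {
  "version": "1.3.0",
  "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
  "integrity": "sha512-..."
}
```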
In short, I think the lessons from the leftpad debacle are (1) people don’t use existing versioning tooling, (2) there is a surprising amount of vendors involved if you look at dep trees for completely normal functionality and (3) the JS ecosystem is particularly fragmented with poor API discipline and non-existent stdlib.
EDIT: Just read up on it again and I misremembered. The author removed leftpad from NPM due to a dispute with the company regarding an unrelated package. That’s more of a mismanaged registry situation. You can’t mutate and remove published code without breaking things. Thus NPM wasn’t a good steward of their registry. If there’s a need to unpublish or mutate anything, there needs to be leeway and a path to migrate.
Micro libraries are worse than no libraries at all, but I maintain they are still better than gargantuan "frameworks" or everything-but-the-kitchen-sink "util"/"commons" packages, where you end up using only a tiny fraction of the functionality but have to deal with the maintenance cost and attack surface of the whole thing.
If you're particularly unlucky, the unused functionality pulls in transitive dependencies of its own - and you end up with libraries in your dependency tree that your code is literally not using at all.
If you're even more unlucky, those "dead code" libraries will install their own event handlers or timers during load, or will be picked up by some framework autodiscovery mechanism, and will actually execute some code at runtime, just not any code that provides anything useful to the project. I think an apt name for this would be "undead code". (The examples I have seen were from Java frameworks like Spring and from webapps with too many autowired request filters, so I do hope it's not such an issue in JS yet.)
> but I maintain they are still better than gargantuan "frameworks" or everything-but-the-kitchen-sink "util"/"commons" packages, where you end up only using a tiny fraction of the functionality but have to deal with the maintenance cost and attack surface of the whole thing.
Indeed. Several toy projects I've done were blown up in size by four orders of magnitude because of Numpy.
I only want multi-dimensional arrays that support reshaping and basic element-wise arithmetic, maybe matrix multiplication; I'm not even that concerned about performance.
But I have to pay for countless numerical algorithms I've never even heard of provided by decades-old C and/or FORTRAN projects, plus even more higher-math concepts implemented in Python, Numpy's extensive (and fragmented - there's even compiled code for testing that's outside of any test folders) test suite that I'll never run myself, a bunch of backwards-compatibility hacks completely irrelevant to my use case, a python-to-fortran interface wrapper generator, a vendored copy of distutils even in the wheel, over 3MiB of .so files for random number generators, a bunch of C header files...
[Edit: ... and if I distribute an application, my users have to pay for all of that, too. They won't use those pieces either; and the likelihood that they can install my application into a venv that already includes NumPy is pretty low.]
I know it's fashionable to complain about dependency hell, but modularity really is a good thing. By my estimates, the total bandwidth used daily to download copies of NumPy from PyPI is on par with that used to stream the Baby Shark video from YouTube - assuming it's always viewed in 1080p. (Sources: yt-dlp info for file size; History for the Wikipedia article on most popular YouTube videos; pypistats.org for package download counts; the wheel I downloaded.)
Sometimes importing zombie "undead code" libraries can be beneficial!
I just refactored a bunch of python computer vision code that used detectron2 and yolo (both of which indirectly use OpenCV and PyTorch and lots of other stuff), and in the process of cleaning up unused code, I threw out the old imports of the yolo modules that we weren't using any more.
The yololess refactored code, which really didn't have any changes that should measurably affect the speed, ran a mortifying 10% slower, and I could not for the life of me figure out why!
Benchmarking and comparing each version showed that the yololess version was spending a huge amount of time with multiple threads fighting over locks, which the yoloful code wasn't doing.
But I hadn't changed anything relating to threads or locks in the refactoring -- I had just rearranged a few of the deck chairs on the Titanic and removed the unused yolo import, which seemed like a perfectly safe innocuous thing to do.
Finally after questioning all of my implicit assumptions and running some really fundamental sanity checks and reality tests, I discovered that the 10% slow-down in detectron2 was caused by NOT importing the yolo module that we were not actually using.
So I went over the yolo code I was originally importing line by line, and finally ran across a helpfully commented top-level call to fix an obscure performance problem:
cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
Even though we weren't actually using yolo, just importing it, executing that one line of code fixed a terrible multithreading performance problem with OpenCV and PyTorch DataLoader fighting behind the scenes over locks, even if you never called yolo itself.
So I copied that magical incantation into my own detectron2 initialization function (not as top level code that got executed on import of course), wrote some triumphantly snarky comments to explain why I was doing that, and the performance problems went away!
The regression wasn't yolo's or detectron2's fault per se, just an obscure invisible interaction of other modules they were both using, but yolo shouldn't have been doing anything globally systemic like that immediately when you import it without actually initializing it.
But then I would have never discovered a simple way to speed up detectron2 by 10%!
So if you're using detectron2 without also importing yolo, make sure you set the number of cv2 threads to zero or you'll be wasting a lot of money.
Seems a lot like the classic "list a couple of the strong advantages and enumerate everything you can think of as a disadvantage". While I'm biased (I've done a bunch of these micro-libraries myself), there are more reasons I/OSS devs make them! To name other advantages (as a dev consuming them):
- Documentation: they are usually well documented, at least a lot better than your average internal piece of code.
- Portability: you learn it once and can use it in many projects, a lot easier than potentially copy/pasting a bunch of files from project to project (I used to do that and ugh what a nightmare it became!).
- Semi-standard: everyone in the team is on the same page about how something works. This works on top of the previous two TBF, but is distinct as well e.g. if you use Axios, 50% of front-end devs will already know how to use it (edit: removed express since it's arguably not micro though).
- Plugins: now with a single "source" other parties or yourself can also write plugins that will work well together. You don't need to do it all yourself.
- Bugs! When there are bugs, now you have two distinct "entities" that have strong motivation to fix the bugs: you+your company, and the dev/company supporting the project. Linus's eyeballs and all (yes, this has a negative side, but those are also covered in the cons in the article already!).
- Bugs 2: when you happen upon a bug, a 3rd party might've already found it and fixed it, or offered an alternative solution! In fact I just did that today [1]
That said, I do have some projects where I explicitly recommend to copy/paste the code straight into your project, e.g. https://www.npmjs.com/package/nocolor (you can still install it though).
Every team should eventually have some internal libraries of useful project-agnostic functionality. That addresses most of your points.
Copy-paste the code into your internal library and maintain it yourself. Don't add a dependency on { "assert": "2.1.0" }. It probably doesn't do what you actually want, anyway.
I think the more interesting point is that most projects don't know what they actually need and the code is disposable. In that scenario micro-libraries make some amount of sense. Just import random code and see how far you can get.
Yes, everyone seems to take the wrong lesson from left-pad. The reason left-pad happened on NPM isn't that there's something uniquely wrong with how NPM was built, but that JS has a uniquely barren standard library. People aren't writing their own left-pad functions in Java or Go or Python; it's just in the stdlib.
I was about to jump into the comment section and say something along the lines of "but no one really thinks they're actually good, right?", only to see the top comment arguing they're good.
I fail to comprehend how a single-function-library called "isNumber" even needs updating, much less "fairly frequently".
The debate around third-party code vs. self-developed is eternal. IMHO if you think you can do better than existing solutions for your use-case, then self-developed is the obvious choice. If you don't, then use third-party. This of course says a lot about those who need to rely on trivial libraries.
>I fail to comprehend how a single-function-library called "isNumber" even needs updating, much less "fairly frequently".
If someone uses isNumber as a fundamental building block and a surrogate for Elm or TypeScript (a transpiler intermediate that would, I hope, treat numbers more soundly), this poor soul, whom I deeply pity, will encounter a lot of strange edge cases (like the one stated in the article: is NaN a number or not?), and if they fear the burden of forking the library they will try to inflict that burden upstream, enabling feature or config bloat.
I'd insinuate that installing isNumber is, like most of these basic micro-libs, a symptom of incompetence in the language. A worn JS dev would try isNaN(parseInt(num+'')) and sometimes succeed.
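And "sometimes" is doing some work there; a quick look at the edge cases of that incantation (standard JS semantics):

```javascript
// Edge cases of the isNaN(parseInt(num + "")) check:
const looksNumeric = (x) => !isNaN(parseInt(x + ""));

looksNumeric("42");     // true
looksNumeric("12abc");  // true: parseInt stops at the first non-digit
looksNumeric("");       // false: parseInt("") is NaN
looksNumeric(3.9);      // true, though parseInt already truncated it to 3
```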
Until I read the comments here, I thought from the title that this was about those small neighborhood "libraries" that are basically a box the size of a very large birdhouse, mounted on a post, with a bunch of donated books inside that passersby are free to borrow. I was really wondering why someone would have a problem with these, unless they work for a book publisher.
Micro libraries are fine (well... not really), the problem starts when each of those depends on 10 more "micro" libraries and so forth. The branching factor quickly leads to bloat. Libraries have a duty to minimize their footprint in ways that applications do not.
I'm not taking an absolute position either way -- the devil is in the details -- but here's my steelman for the opposing view:
When you put something in the standard library, it's harder to take it out, meaning that you're committing development resources to support the implementation. Furthermore things change: protocols and formats rise and fall in popularity and programming style evolves as the language changes (e.g. callbacks vs. promises in JS).
Therefore the stdlib becomes where libraries go to die, and you'll always have a set of third party libraries that are "pseudo-standard", like NumPy in Python.
Having a minimal stdlib lets you "free-market" the decision, letting community effects settle what is considered standard in the ecosystem, and lets you keep the stdlib's surface minimal and optimized, like what happened with C.
Why isn't there some kind of compiler/linker/stripper that would collect only the functions actually used and compile them into an application-specific library? Yes, I know that dynamic dispatch makes that difficult, but the programmer surely knows which functions he wants to call.
I sometimes hanker for a return to Fortran IV where every routine was separately compiled and the linker only put into the object code those that were referred to by something else.
There are many ways to look at this. Maybe the standard set of such functions should be implemented as native functions in the language (1). Or as a standard external function library (2). Or people should copy-paste the functions they need and keep them in their organization's function library (3). I am sure there are a few more options.
I moved to option 3: in all my apps I include a function library that I've built over the years, so I don't start from scratch every time. I deeply hate ("hate speech" example here) dependencies on libraries from all over the Internet, for security reasons, but I do copy-paste code into my library when needed, after I read, understand and check the code that I copy. The biggest advantage is that some of this code is better than what I could invent from scratch on a busy day, and I save the time of doing it right. The disadvantage is there is no way to reward these authors who contribute to humankind.
PS. My function library has functions mostly written by me, over 80%, but it includes code written by others. In my case, every time I need a function I check my existing library first, then analyze whether to write or copy.
I would argue that "They should either be copy-pasted into your codebase" would cause more code liabilities and maintenance required further down the line. I've personally seen codebases with a ton of custom code, copy-pasted code, inspired implementations before and it was horrible to get them up to speed with the latest functionality / best practices. I agree that having too many micro-libraries might not be beneficial though, but perhaps look for larger, more well-established libraries that encompasses those functionalities :)
What's an example of "latest functionality" or "best practices" that should or could possibly change for a function like leftPad, and that wouldn't happen automatically by virtue of being in the code base (such as formatting)?
You don't even have to talk about hypotheticals when it comes to this "vendor everything instead" philosophy. This is basically how the C world works, a lot of which is driven by a general allergy to dependencies by embedded developers - partially out of necessity (space and overhead are MUCH more important constraints in embedded land) - but also partially out of cargo culting.
I guess the opinion I'll share here is that I don't hear too many people arguing that the way embedded developers manage C libraries is at the forefront of how we should be handling and distributing code.
> One breaking change simply upgraded the minimum supported Node version from 0.10.0 to 0.12.0 and changed nothing else.
Well, that's a proper use of SemVer; I'm not sure why you hold it against the library's author. I've personally been burned enough times by libraries that for some reason think that literally being unable to compile them is somehow a backwards-compatible change, so it's refreshing to see that some people actually understand this.
There's nothing especially wrong with small libraries if you carefully manage them and don't allow for supply chain attacks. I don't think updates are a serious concern compared to not using a library, because your own code could easily have vulnerabilities too. It is harder to update lots of small libraries versus one big library, but you pick your battle.
In my Laravel projects, there are a few packages of much more niche/hobbyist origin without corporate backing, some haven’t been updated for a while, and others are perfectly fine and don’t need much maintenance.
Normally, packages are listed in my composer.json and stored in vendor/. For those packages, I created a separate folder called vendor_private/ which is part of my Git tree, put copies of these weird little packages in it, and set up my composer.json to consider that folder a repository.
Works like a charm. My big important packages are still upstream. I can customize the little ones as needed to fit better, or have better code, and not worry about them going unmaintained. It’s also way quicker than copying the files individually out of the package and into the right places (along with updating Namespaces, configuration, etc.) Once in a while, I’ll go back and see if anything worthwhile has changed upstream - and so far, it never has.
Personally I prefer sharing one level up from function to conceptual module, ie. instead of "left-pad" function, "string" module, ie. a bit like this [0] (`${authorshipDisclaimer}`).
I'm also an advocate, against the crowd, of qualified imports, as they help with refactoring (renames are propagated, especially in monorepos), with readability and reviews (functions are qualified, so you know where they're coming from), and with the overall coding experience: a qualified module name followed by a dot gives good autocompletion, imports look neat in larger projects, etc. A codebase written like this resembles an extended standard library. It also helps with solving problems by encouraging first-principles thinking and bottom-up coding that produces an auditable codebase with shallow external dependencies.
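A minimal sketch of the idea (file and function names are hypothetical): a project-local `string` module whose helpers are always called through a qualified name, so the call site tells you where the code lives.

```javascript
// string.js: a project-local "string" module; consumers would do
//   import * as Str from "./string.js";
// and call Str.leftPad(...), making the origin visible at every call site.
const Str = {
  leftPad: (s, n, c = " ") => String(s).padStart(n, c),
  truncate: (s, n) => (s.length > n ? s.slice(0, n - 1) + "…" : s),
};

Str.leftPad("7", 3, "0");       // "007"
Str.truncate("hello world", 5); // "hell…"
```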
Using SNS as an example when it's neither micro nor a library but a service (and a huge abstraction over native push notifications, whereas most micro-libraries provide simple utilities that aren't very abstract), saying that complex libraries are harder to audit and hence a security risk (which should be a point in favor of micro-libraries that are small enough to audit in minutes), saying libraries might have large footprints (which is surely another reason to go for micro-libraries over all-you-could-possibly-need-libraries), saying transitive dependencies are bad, (yet again, this points towards an advantage of micro-libraries, which are less likely to have many dependencies), ... I don't know.
I think the JS library ecosystem is a debacle, but there's really only one point in this post that grabbed me:
"Would future updates be useful? No. The library is so simple that any change to the logic would be breaking, and it is already clear that there are no bugs."
Maybe what you want is a library ecosystem where things can be marked "this will never change". Something crazy happens and you actually need to update "is-number"? Rename it.
Of course, you can simulate that with a single large omnibus dependency that everyone can trust that pulls all these silly micro-libraries in verbatim.
And that is, I think, the value of micro-libs, at least in JS: you don't want to think about all the edge cases when you only want to check whether something is a Number.
This library is a hilarious example of a huge problem with this kind of package. "Number" is in the eye of the beholder. A string containing numeric characters is, in my view, in no useful way "a number". A package that treats it as such just perpetuates weakly-typed nonsense.*
But the broader point is, you can't outsource understanding to a package. There will be places in your code where NaN is a perfectly valid number, or Infinity. And other places where you absolutely need to be sure neither of the above make their way in.
Pretending that a package can capture the universal essence of "numberness", and that this will broadly apply across the entire JS ecosystem (see reported benefits like "different libraries can all rely on is-number instead of rewriting duplicated helper functions!"), is naive.
I wrote more about this in a post linked in a top level comment. The is-promise library is another great example.
* Personal pet theory is that the package author would have been embarrassed to publish a 1-line package, so included "numeric strings are numbers" as a fig leaf to justify the package's existence. They should have instead created two new packages, is-actual-number and is-numeric-string, so the implementation of is-number could be nice and clean:
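Presumably something along these lines; a hedged sketch of the two hypothetical packages named above (neither is claimed to exist on npm):

```javascript
// Hypothetical "is-actual-number": only real number values qualify;
// NaN is deliberately rejected here.
function isActualNumber(x) {
  return typeof x === "number" && !Number.isNaN(x);
}

// Hypothetical "is-numeric-string": non-empty strings that fully coerce.
function isNumericString(x) {
  return typeof x === "string" && x.trim() !== "" && !Number.isNaN(Number(x));
}

// The "clean" is-number then just composes the two:
function isNumber(x) {
  return isActualNumber(x) || isNumericString(x);
}
```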
Doesn't that exclude NaN (which you probably want despite the name)? I think this really highlights that you probably do want to think about those edge cases...
In any case this is a bad example because Typescript exists.
It really depends on the case. Some folks use the left-pad library for aligning, which can be done in 10 minutes. In C++, we have header libraries for thread pools, etc. I don't think implementing a fully functional thread pool with waiting and other features is an easy task. In conclusion, it really depends on the situation.
No it doesn't. The answer to needing a string manipulation function would be to use a string manipulation library that includes that function, not one that is nothing but that function.
If you don't need anything else, and having the linker not include unused code isn't good enough, then just vendor the single function.
There could still be some special case, but that would take a lot of explaining to justify, and would be such an exception that it's silly to talk about. There are legitimate one-time freak exceptions to every principle. It means nothing.
> Javascript has a very unique set of challenges that differentiates itself from the rest of the programming world. The primary driving factor for its unique position is javascript is downloaded on the client's browser. Languages that run on the server, for example, don't need to send code that it runs onto client machines, they merely respond to requests made from the browser.
This is profoundly true. JavaScript written for the frontend has different "physics" to backend code.
It's not only code size that is significant. It's the fact that when you ship code over the wire to a client, you don't know what browser or even JS engine version will be interpreting it. Platform incompatibility has been a huge driver of issues in the JS/NPM ecosystem and has caused JS's culture to develop the way it has.
I wrote more about this, link in a top level comment.
Thought experiment: if a LLM can correctly produce the code for a micro-library like, say, leftpad... Should you call leftpad as a dependency or should you have the LLM generate that leftpad function for you?
And if the LLM ain't good enough to write leftpad, how can I trust it to write anything at all?
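For context on the thought experiment: the function in question is tiny. A sketch of its behavior (not the verbatim npm package source) is roughly:

```javascript
// Pad `str` on the left with `ch` (default: space) until it reaches `len`.
// A sketch of left-pad's behavior, not the verbatim package code.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch === undefined ? " " : String(ch);
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

leftPad("7", 3, "0"); // "007"
leftPad("abc", 6);    // "   abc"
```

If an LLM (or any junior dev) can reproduce this reliably, the case for pulling it in as a remote dependency gets thin.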
Where do you draw the line: is 10 lines "micro"? 50? 100? The article never quantifies it, yet its very click-bait-y title relies on the term. How many bugs can hide in 50 or 100 lines? And do you really want to copy-paste that code at a static point in time?!
I think discussions like this mostly miss the point.
Obviously you want basic, stable and well documented functionality in your programming language.
But JavaScript simply does not have it. So how do you solve this dilemma?
1) the everything is an import way: use NPM and create a dependency hell from hell (requires Satan) made by Lucifer (same as Satan but different) using lava with fire (requires node v <= 9.42.0815) and heat (deprecated) requiring brimstone (only node v > 10.23) with a cyclic dependency on the Devil (incompatible with Satan).
2) the Golang way: copy paste ALL the things, only for your co-worker to copy paste all the things again, only for your co-worker to copy paste all the things again, only for your...
Way 1 wastes your time when it breaks (sooner rather than later) but is necessary for non-trivial functionality. Way 2 works only for trivial packages, so choose your poison.
JavaScript (apart from not being a good programming language in general) is sorely missing a std lib.
One could argue that having a bad std lib is even worse (PHP anyone?) but it is really hard to decide.
Sadly JavaScript is just unfit for the purpose it is being used for.
How is this the fault of the library? You chose the wrong one!
"This often cancels out the primary benefit of libraries. No, you don’t have to write the code, but you do have to adapt your problem to fit the library"
You evaluated the library, found it unsuitable, and yet it is somehow their fault.
Why on earth would you project your own failures on to someone else's code? You do you!
While I mainly agree with the author's substantive point (though I find some of the ways it's presented in this post not entirely convincing or fair), I am interested that someone else has identified this:
> I have talked a lot about the costs of libraries, and I do hope people are more cautious about them. But there’s one factor I left out from my previous discussion. I think there’s one more reason why people use libraries: fear.
> Programmers are afraid of causing bugs. Afraid of making mistakes. Afraid of missing edge cases. Afraid that they won’t be able to understand how things work. In their fear they fall back on libraries. “Thank goodness someone else has solved the problem; surely I never would have been able to.”
I think this is true, but why does the JS ecosystem seem to have "more fear" than for example the Python ecosystem?
I wrote about this a while ago. I think that actually JS does (or did) cause more fear in its developers than other programming languages. I described it as paranoia, a more insidious uncertainty.
Quoting myself[1]:
> There are probably many contributing factors that have shaped NPM into what it is today. However, I assert that the underlying reason for the bizarre profusion of tiny, absurd-seeming one-liner packages on NPM is paranoia, caused by a unique combination of factors.
> Three factors have caused a widespread cultural paranoia among JavaScript developers. This has been inculcated over years. These factors are: JavaScript's weak dynamic type system; the diversity of runtimes JavaScript targets; and the physics of deploying software on the web.
...
> Over the years there has been rapid evolution in both frontend frameworks and backend JavaScript, high turnover in bundlers and best-practises. This has metastasized into a culture of uncertainty, an air of paranoia, and an extreme profusion of small packages. Reinventing the wheel can sometimes be good - but would you really bother doing it if you had to learn all the arcane bullshit of browser evolution, IE8 compatibility, implementation bugs, etc. ad infinitum?
> And it's not just that you don't understand how things work now, or how they used to work - but that they'll change in the future!
I think the culture of JS has been reinforced over time, and the result is a novel form of paranoia. npm makes package-sharing easy, developers share trivial packages, people use trivial packages, people rationalize trivial packages, people teach beginners never to write code, beginners think they can never write code, beginners grow up and here we are.
Certainly the language is quirky, but it really doesn't change that much. Frameworks have come and gone but JavaScript itself is still the same. is-number would have looked much the same 15 years ago, if anyone was crazy enough to actually distribute it.
> “Thank goodness someone else has solved the problem; surely I never would have been able to.”
No, it's much more mundane: "Thank goodness someone else has solved the problem, because I sure as hell don't want to solve it myself; I don't have the time or the brain power/will/motivation for that". What is a number in JS? I don't even want to start thinking about it, just give me an isNumber() function. Why is it not in the standard library in the first place?
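"What is a number in JS?" is genuinely thornier than it sounds. A few of the sharp edges the commenter doesn't want to think about:

```javascript
// Edge cases behind "is this a number?" in JavaScript:
typeof NaN;           // "number" - NaN is a number as far as typeof cares
typeof Infinity;      // "number"
typeof new Number(5); // "object" - boxed numbers are not typeof "number"
Number("");           // 0  - the empty string coerces to zero
Number("  42  ");     // 42 - surrounding whitespace is ignored
Number("0x10");       // 16 - hex strings parse
isNaN("foo");         // true  - global isNaN coerces its argument first
Number.isNaN("foo");  // false - Number.isNaN does not coerce
```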
The title of the article is "Micro-libraries need to die already". Renaming the submission to "Micro-libraries should never be used" is pathetic, Daniel. I'm not surprised though.
ristos|1 year ago
Another advantage is that because they're so minimal and self-contained, they're often "completed", because they achieved what they set out to do. So there's no need to continually patch it for security updates, or at least you need to do it less often, and it's less likely that you'll be dealing with breaking changes.
The UNIX philosophy is also built on the idea of small programs, just like micro-libraries: doing one thing and doing it well, and composing those things to make larger things.
I would argue the problem is how dependencies in general are added to projects, which the blog author pointed out with left-pad. Copy-paste works, but I would argue the best way is to fork the libraries and add submodules to your project. Then if you want to pull a new version of the library, you can update the fork and review the changes. It's an explicit approach to managing it that can prevent a lot of pitfalls like malicious actors, breaking changes leading to bugs, etc.
foul|1 year ago
In JS and NPM they are a plague, because they promise to be a substitute for competence in basic programming theory, competence in JS, gaps and bad APIs inside JS, and de-facto standards in the programming community like the oldest operating functions in libc.
There are a lot of ways to pad a number in JS, and a decent dev would keep their own utility library, or hell, a function to copy-paste for that. But no. npm users are taught to fire and forget, and update everything, with no concept of vendoring (which would have made incidents like left-pad, faker and colors less maddening, and vendoring is even built into npm, and it's very good!). For years they copy-pasted into the wrong window, really: they should copy-paste blocks of code and not npm commands. And God help you if you type out your npm commands, because bad actors have caught on to the trend and made millions of libraries with a hundred different scams waiting for fat fingers.
By understanding that JS on the backend is optimizing for reducing cost whatever the price, becoming Smalltalk for the browser and for PHP devs, you would expect some kind of standard to emerge for having a single way to do routine stuff. Instead in JS-world you get TypeScript, and in the future maybe WASM. JS is just doomed. Like, we are doomed if JS isn't, to be honest.
porcoda|1 year ago
ivan_gammel|1 year ago
Regardless of how good or small the library supposedly is, the frequency at which you need to check for updates is the same. It doesn't have anything to do with the perceived or original quality of the code. Every 3rd party library has at least a dependency on a platform, and platforms are big: they have vulnerabilities and introduce breaking changes. Then there's the question of trust and the consistency of your delivery process. You won't adapt your routines to the specifics of every tiny piece of 3rd party code, so you probably check for updates regularly and for everything at once. At that point their size is no longer an advantage.
> Copy-paste works, but I would argue the best way is to fork the libraries and add submodules to your project. Then if you want to pull a new version of the library, you can update the fork and review the changes.
This sounds "theoretical" and is not going to work at scale. You cannot seriously expect application-level developers to understand the low-level details of every dependency they want to use. For a meaningful code review of merges they must be domain experts; otherwise the effectiveness of such an approach will be very low, and they will inevitably have to trust the authors and just merge without going into details.
GuB-42|1 year ago
Submodules can work too, but do you really need these extra lines in your build scripts, extra files and directories, and the import lines just for a five line function? Copy-pasting is much simpler, with maybe a comment referring to the original source.
Note: there may be some legal reasons for keeping "micro-libraries" separate, or for not using them at all, but IANAL as they say.
Barrin92|1 year ago
The Unix philosophy is also built on willful neglect of systems thinking. The complexity of a system isn't in the complexity of its parts but in the complexity of the interactions between its parts.
Putting ten micro-libraries together, even if each is simple, doesn't mean you have a simple program, in fact it doesn't even mean you have a working program, because that depends entirely on how your libraries play together. When you implement the content of micro-libraries yourself you have to be at the very least conscious not just of what, but how your code works, and that's a good first defense against putting parts together that don't fit.
alerighi|1 year ago
Unix has small programs, but they are not separate projects. For example, all the basic Linux utilities are developed and distributed as part of the GNU coreutils package.
It's the same as having a modular library with multiple functions in it that you can choose from. In fact, the real problem is that functions like isNumber shouldn't even be libraries; they should be in the language's standard library itself.
tgv|1 year ago
But you need the functionality anyway, so there are two choices of dependency: on your own code, or on someone else's. You can't avoid the dependency, and it comes at a cost.
If you don't know how to code the functionality, or it would take too much time, a library is a way out. But if you need leftPad or isNumber as an external dependency, that's so far in the other direction that it's practically a sign of incompetence.
reaperducer|1 year ago
This year I started learning FORTH, and it's very much this philosophy. To build a building, you don't start with a three-story slab of marble. You start with hundreds of perfect little bricks and fit them together.
If you come from a technical ecosystem outside the Unix paradigm, it can be hard to grasp.
bborud|1 year ago
kazinator|1 year ago
If you understand what is going on, paste it into your tree.
mattlondon|1 year ago
Well, I think that is the point: they're not self-contained. You are adding mystery stuff, and who knows how deep the chain of dependencies goes. See the left-pad fiasco that broke so much stuff, because the chain of transitive dependencies ran deep and wide.
NPM is a dumpster fire in this regard. I try to avoid it - is there a flag you can set to say "no downstream dependencies" or something when you add a dependency? At least that way you can be sure things really are self-contained.
Toutouxc|1 year ago
jvanderbot|1 year ago
Micro-dependencies are a god damn nuisance, especially with all the transitive micro-dependencies that come along, often with different versions, alternative implementations, etc.
jaredsohn|1 year ago
I haven't done anything with this myself (just brainstormed a bit with chatgpt) but I wonder if the solution is https://docs.npmjs.com/cli/v10/commands/npm-ci
Basically, enforce that all libraries have lock files and when you install a dependency use the exact versions it shipped with.
Edit: Can someone clarify why this doesn't work? Wouldn't it make installing node packages work the same way as it does in python, ruby, and other languages?
mewpmewp2|1 year ago
prng2021|1 year ago
You could say that if all the popular web frameworks in use today were rewritten to import and use hundreds of thousands of pico-libraries, their codebases would be, as you say, composed of many highly modular, self-contained pieces that are easy to understand.
/s
wetpaws|1 year ago
[deleted]
unknown|1 year ago
[deleted]
oftenwrong|1 year ago
To reformulate the statement made in the intro of this post: "maybe it’s not a great idea to outsource _any critical_ functionality to random people on the internet."
It has long been a standard, best practice in software engineering to ensure dependencies are stored in and made available from first-party sources. For example, this could mean maintaining an internal registry mirror that permanently stores any dependencies that are fetched. It could also be done by vendoring dependencies. The main point is to take proactive steps to ensure your dependencies will always be there when you need them, and to not blindly trust a third-party to always be there to give your dependencies to you.
klabb3|1 year ago
Well everything is critical in the sense that a syntax error could break many builds and CI systems.
This is what lock files are for. If used properly, and the registry is available, there are no massive issues. This is how things are supposed to work; all the tooling is made this way.
In short, I think the lessons from the leftpad debacle are (1) people don’t use existing versioning tooling, (2) there is a surprising amount of vendors involved if you look at dep trees for completely normal functionality and (3) the JS ecosystem is particularly fragmented with poor API discipline and non-existent stdlib.
EDIT: Just read up on it again and I misremembered. The author removed leftpad from NPM due to a dispute with the company regarding an unrelated package. That’s more of a mismanaged registry situation. You can’t mutate and remove published code without breaking things. Thus NPM wasn’t a good steward of their registry. If there’s a need to unpublish or mutate anything, there needs to be leeway and a path to migrate.
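For reference, the pinning described in (1) is what a modern package-lock.json records for every dependency: an exact version, a resolved URL, and an integrity hash (the values below are illustrative, hash elided):

```json
{
  "name": "example-app",
  "lockfileVersion": 3,
  "packages": {
    "node_modules/left-pad": {
      "version": "1.3.0",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
      "integrity": "sha512-..."
    }
  }
}
```

With this in place, `npm ci` reproduces the exact tree or fails, rather than silently resolving something newer.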
Brian_K_White|1 year ago
xg15|1 year ago
If you're particularly unlucky, the unused functionality pulls in transitive dependencies of its own - and you end up with libraries in your dependency tree that your code is literally not using at all.
If you're even more unlucky, those "dead code" libraries will install their own event handlers or timers during load, or will be picked up by some framework autodiscovery mechanism, and will actually execute some code at runtime, just not any code that provides anything useful to the project. I think an apt name for this would be "undead code". (The examples I have seen were from Java frameworks like Spring and from webapps with too many autowired request filters, so I do hope it's not such an issue in JS yet.)
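The "undead code" failure mode comes from the fact that a module's top-level statements run at load time, whether or not you ever call its exports. In miniature (the module load is simulated with a function here so the effect is observable in one file):

```javascript
// Simulated module load: requiring a dep you never call can still
// register timers or handlers, because top-level code runs on import.
let handlersRegistered = 0;

function loadUnusedDep() { // stands in for require("some-unused-dep")
  handlersRegistered += 1; // e.g. a setInterval(...) at module top level
  return { helperNobodyCalls: () => 42 };
}

const dep = loadUnusedDep(); // pulled in transitively, exports never used

// The export is dead code, yet the load-time side effect already happened:
console.log(handlersRegistered); // 1
```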
zahlman|1 year ago
Indeed. Several toy projects I've done were blown up in size by four orders of magnitude because of Numpy.
I only want multi-dimensional arrays that support reshaping and basic element-wise arithmetic, maybe matrix multiplication; I'm not even that concerned about performance.
But I have to pay for countless numerical algorithms I've never even heard of provided by decades-old C and/or FORTRAN projects, plus even more higher-math concepts implemented in Python, Numpy's extensive (and fragmented - there's even compiled code for testing that's outside of any test folders) test suite that I'll never run myself, a bunch of backwards-compatibility hacks completely irrelevant to my use case, a python-to-fortran interface wrapper generator, a vendored copy of distutils even in the wheel, over 3MiB of .so files for random number generators, a bunch of C header files...
[Edit: ... and if I distribute an application, my users have to pay for all of that, too. They won't use those pieces either; and the likelihood that they can install my application into a venv that already includes NumPy is pretty low.]
I know it's fashionable to complain about dependency hell, but modularity really is a good thing. By my estimates, the total bandwidth used daily to download copies of NumPy from PyPI is on par with that used to stream the Baby Shark video from YouTube - assuming it's always viewed in 1080p. (Sources: yt-dlp info for file size; History for the Wikipedia article on most popular YouTube videos; pypistats.org for package download counts; the wheel I downloaded.)
DonHopkins|1 year ago
I just refactored a bunch of python computer vision code that used detectron2 and yolo (both of which indirectly use OpenCV and PyTorch and lots of other stuff), and in the process of cleaning up unused code, I threw out the old imports of the yolo modules that we weren't using any more.
The yololess refactored code, which really didn't have any changes that should measurably affect the speed, ran a mortifying 10% slower, and I could not for the life of me figure out why!
Benchmarking and comparing each version showed that the yololess version was spending a huge amount of time with multiple threads fighting over locks, which the yoloful code wasn't doing.
But I hadn't changed anything relating to threads or locks in the refactoring -- I had just rearranged a few of the deck chairs on the Titanic and removed the unused yolo import, which seemed like a perfectly safe innocuous thing to do.
Finally after questioning all of my implicit assumptions and running some really fundamental sanity checks and reality tests, I discovered that the 10% slow-down in detectron2 was caused by NOT importing the yolo module that we were not actually using.
So I went over the yolo code I was originally importing line by line, and finally ran across a helpfully commented top-level call to fix an obscure performance problem:
https://github.com/ultralytics/yolov5/blob/master/utils/gene...
Even though we weren't actually using yolo, just importing it, executing that one line of code fixed a terrible multithreading performance problem with OpenCV and PyTorch DataLoader fighting behind the scenes over locks, even if you never called yolo itself. So I copied that magical incantation into my own detectron2 initialization function (not as top level code that got executed on import of course), wrote some triumphantly snarky comments to explain why I was doing that, and the performance problems went away!
The regression wasn't yolo's or detectron2's fault per se, just an obscure invisible interaction of other modules they were both using, but yolo shouldn't have been doing anything globally systemic like that immediately when you import it without actually initializing it.
But then I would have never discovered a simple way to speed up detectron2 by 10%!
So if you're using detectron2 without also importing yolo, make sure you set the number of cv2 threads to zero or you'll be wasting a lot of money.
franciscop|1 year ago
- Documentation: they are usually well documented, at least a lot better than your average internal piece of code.
- Portability: you learn it once and can use it in many projects, a lot easier than potentially copy/pasting a bunch of files from project to project (I used to do that and ugh what a nightmare it became!).
- Semi-standard: everyone in the team is on the same page about how something works. This works on top of the previous two TBF, but is distinct as well e.g. if you use Axios, 50% of front-end devs will already know how to use it (edit: removed express since it's arguably not micro though).
- Plugins: now with a single "source" other parties or yourself can also write plugins that will work well together. You don't need to do it all yourself.
- Bugs! When there are bugs, now you have two distinct "entities" that have strong motivation to fix the bugs: you+your company, and the dev/company supporting the project. Linus's eyeballs and all (yes, this has a negative side, but those are also covered in the cons in the article already!).
- Bugs 2: when you happen upon a bug, a 3rd party might've already found a bug and fixed it or offered an alternative solution! In fact I just did that today [1]
That said, I do have some projects where I explicitly recommend to copy/paste the code straight into your project, e.g. https://www.npmjs.com/package/nocolor (you can still install it though).
[1] https://github.com/umami-software/node/issues/1#issuecomment...
pton_xd|1 year ago
Copy-paste the code into your internal library and maintain it yourself. Don't add a dependency on { "assert": "2.1.0" }. It probably doesn't do what you actually want, anyway.
I think the more interesting point is that most projects don't know what they actually need and the code is disposable. In that scenario micro-libraries make some amount of sense. Just import random code and see how far you can get.
qwerty456127|1 year ago
I would prefer them to be built straight in the languages.
jdminhbg|1 year ago
flysand7|1 year ago
_xiaz|1 year ago
userbinator|1 year ago
I fail to comprehend how a single-function-library called "isNumber" even needs updating, much less "fairly frequently".
The debate around third-party code vs. self-developed is eternal. IMHO if you think you can do better than existing solutions for your use-case, then self-developed is the obvious choice. If you don't, then use third-party. This of course says a lot about those who need to rely on trivial libraries.
foul|1 year ago
If someone uses isNumber as a fundamental building block and a surrogate for Elm or TypeScript (a transpiler intermediate that would treat numbers more soundly, I hope), this poor soul, whom I deeply pity, will encounter a lot of strange edge cases (like the one stated in the article: is NaN a number or not?), and if they fear the burden of forking the library they will try to inflict this burden upstream, enabling feature or conf bloat.
I insinuate that installation of isNumber is, like most of these basic microlibs, a symptom of incompetence in using the language. A worn JS dev would try isNaN(parseInt(num+'')) and sometimes succeed.
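For the curious, here is that one-liner wrapped up (looksNumeric is my name for it, not a real package), with the cases where it only "sometimes" succeeds:

```javascript
// The worn JS dev's quick check, with its sharp edges annotated.
function looksNumeric(num) {
  return !isNaN(parseInt(num + "", 10));
}

looksNumeric("42");     // true
looksNumeric("12abc");  // true  - parseInt stops at the first non-digit
looksNumeric("1.5");    // true  - but the value parsed was 1, not 1.5
looksNumeric("");       // false
looksNumeric(Infinity); // false - an actual number, rejected anyway
```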
guestbest|1 year ago
consteval|1 year ago
Never underestimate the complexity and footgunny nature of JS' type system.
shiroiushi|1 year ago
PhilipRoman|1 year ago
KaiserPro|1 year ago
surely it can't be beyond the wit of programming kind to have a standard lib, or even layers of standard lib for Node?
What is the argument for not having a standard lib, apart from download speed?
qsort|1 year ago
When you put something in the standard library, it's harder to take it out, meaning that you're committing development resources to support the implementation. Furthermore things change: protocols and formats rise and fall in popularity and programming style evolves as the language changes (e.g. callbacks vs. promises in JS). Therefore the stdlib becomes where libraries go to die, and you'll always have a set of third party libraries that are "pseudo-standard", like NumPy in Python.
Having a minimal stdlib lets you "free-market" the decision, letting the community effects take care of what is considered standard in the ecosystem, and lets you optimize its minimal surface, like what happened with C.
kwhitefoot|1 year ago
I sometimes hanker for a return to Fortran IV where every routine was separately compiled and the linker only put into the object code those that were referred to by something else.
AdrianB1|1 year ago
I moved to option 3: in all my apps I include a function library that I have built over the years, so I don't start from scratch every time. I deeply hate ("hate speech" example here) dependencies on libraries from all over the Internet, for security reasons, but I copy-paste code into my library when needed, after I read, understand and check the code that I copy. The biggest advantage is that some of this code is better than what I could invent from scratch on a busy day, and I save the time of doing it right. The disadvantage is there is no way to reward these authors that contribute to humankind.
PS. My function library has functions mostly written by me, over 80%, but it includes code written by others. In my case, every time I need a function I check my existing library first, then analyze whether to write or copy.
xmodem|1 year ago
edwinjm|1 year ago
jrpelkonen|1 year ago
hinkley|1 year ago
AndyKelley|1 year ago
This doesn't apply to micro-libraries, but it looks like that cost/benefit list is intended to cover libraries in general.
Lws803|1 year ago
quonn|1 year ago
SOLAR_FIELDS|1 year ago
I guess the opinion I'll share here is that I don't hear too many people arguing that the way embedded developers manage C libraries is at the forefront of how we should be handling and distributing code.
Joker_vD|1 year ago
Well, that's a proper use of SemVer; not sure why you hold it against the library's author. I've personally been burned enough times by libraries that for some reason think that literally being unable to compile them is somehow a backwards-compatible change, so it's refreshing to see that some people actually understand that.
wakawaka28|1 year ago
gjsman-1000|1 year ago
Normally, packages are listed in my composer.json and stored in vendor/. For those packages, I created a separate folder called vendor_private/ which is part of my Git tree, put copies of these weird little packages in it, and set up my composer.json to consider that folder a repository.
Works like a charm. My big important packages are still upstream. I can customize the little ones as needed to fit better, or have better code, and not worry about them going unmaintained. It’s also way quicker than copying the files individually out of the package and into the right places (along with updating Namespaces, configuration, etc.) Once in a while, I’ll go back and see if anything worthwhile has changed upstream - and so far, it never has.
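The setup described can be sketched in composer.json roughly like this (package and folder names are illustrative; each copied package keeps its own composer.json inside vendor_private/):

```json
{
  "repositories": [
    { "type": "path", "url": "vendor_private/*" }
  ],
  "require": {
    "acme/tiny-helper": "*"
  }
}
```

Composer's "path" repository type resolves those packages locally, so they install like any upstream dependency while living in your Git tree.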
mirekrusin|1 year ago
I'm also an advocate, against the crowd, of qualified imports, as they help with refactoring (renames are propagated, especially in monorepos), readability/reviews (functions are qualified, so you know where they're coming from) and overall coding experience: a qualified module name followed by a dot gives good autocompletion, imports look neat in larger projects, etc. A codebase written like this resembles an extended standard library. It also helps with solving problems by encouraging first-principles thinking, bottom-up coding that produces an auditable codebase with shallow external dependencies, etc.
[0] https://github.com/preludejs
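A minimal sketch of the qualified-import style being described (module and function names are invented; in a real module you'd write `import * as Str from "./str.js"`):

```javascript
// Inlined stand-in for `import * as Str from "./str.js"`:
// every call site names the module the function comes from.
const Str = {
  pad: (s, len, ch = " ") => String(s).padStart(len, ch),
  trim: (s) => String(s).trim(),
};

Str.pad("7", 3, "0"); // "007" - qualified call reads like an extended stdlib
Str.trim("  x  ");    // "x"  - and "Str." scopes editor autocompletion
```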
morningsam|1 year ago
Using SNS as an example when it's neither micro nor a library but a service (and a huge abstraction over native push notifications, whereas most micro-libraries provide simple utilities that aren't very abstract); saying that complex libraries are harder to audit and hence a security risk (which should be a point in favor of micro-libraries that are small enough to audit in minutes); saying libraries might have large footprints (which is surely another reason to go for micro-libraries over all-you-could-possibly-need libraries); saying transitive dependencies are bad (yet again, this points towards an advantage of micro-libraries, which are less likely to have many dependencies)... I don't know.
tptacek|1 year ago
"Would future updates be useful? No. The library is so simple that any change to the logic would be breaking, and it is already clear that there are no bugs."
Maybe what you want is a library ecosystem where things can be marked "this will never change". Something crazy happens and you actually need to update "is-number"? Rename it.
Of course, you can simulate that with a single large omnibus dependency that everyone can trust that pulls all these silly micro-libraries in verbatim.
unstable|1 year ago
Indeed you can, but it depends what isNumber does. This is more like what it should do IMO:
  function isNumber(foo) {
    // foo == foo filters out NaN (NaN is the only value not equal to itself)
    return ((typeof foo === "number") && (foo == foo)) ||
           ((typeof foo === "object") && (foo instanceof Number));
  }
And that is I think the value of micro libs, at least in JS, you don't want to think about all the edge cases when you only want to check if something is a Number.
crabmusket|1 year ago
But the broader point is, you can't outsource understanding to a package. There will be places in your code where NaN is a perfectly valid number, or Infinity. And other places where you absolutely need to be sure neither of the above make their way in.
I can feel the power of webscale coursing through me
layer8|1 year ago
IshKebab|1 year ago
ristos|1 year ago
https://2ality.com/2017/08/type-right.html
rc_kas|1 year ago
crabmusket|1 year ago
(At this point Nodejs is the defacto tooling ecosystem for even JS destined to run in a browser. You can't separate the two.)
edwinjm|1 year ago
n0tank3sh|1 year ago
Brian_K_White|1 year ago
qudat|1 year ago
However, if I can inline a small function, I will, so in that sense I agree.
crabmusket|1 year ago
replete|1 year ago
TacticalCoder|1 year ago
edfletcher_t137|1 year ago
IshKebab|1 year ago
Cheezmeister|1 year ago
Perhaps we should start there.
statictype|1 year ago
LordHeini|1 year ago
Obviously you want basic, stable and well documented functionality in your programming language.
But JavaScript simply does not have it. So how do you solve this dilemma?
1) the everything is an import way: use NPM and create a dependency hell from hell (requires Satan) made by Lucifer (same as Satan but different) using lava with fire (requires node v <= 9.42.0815) and heat (deprecated) requiring brimstone (only node v > 10.23) with a cyclic dependency on the Devil (incompatible with Satan).
2) the Golang way: copy paste ALL the things, only for your co-worker to copy paste all the things again, only for your co-worker to copy paste all the things again, only for your...
Way 1 wastes your time when it breaks (sooner rather than later) but is necessary for non-trivial functionality. Way 2 works only for trivial packages, so choose your poison.
JavaScript (apart from not being a good programming language in general) is sorely missing a std lib.
One could argue that having a bad std lib is even worse (PHP anyone?) but it is really hard to decide.
Sadly JavaScript is just unfit for the purpose it is being used for.
kazinator|1 year ago
Applications should never have trivial, tiny libraries as moving-target external dependencies.
If you must use a small library, bring it into the program.
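As an example of "bringing it into the program": the entire substance of a one-liner like is-promise (mentioned upthread) can live in the project's own source, where anyone can read and review it. This is a sketch matching the widely known thenable check, not a claim about the package's exact current code:

```javascript
// Vendored into the project instead of pulled from npm: a thenable check
// in the style of the is-promise micro-library.
function isPromise(value) {
  return !!value &&
    (typeof value === "object" || typeof value === "function") &&
    typeof value.then === "function";
}
```

There is no version to track, nothing to update, and no maintainer who can break or hijack it.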
stuaxo|1 year ago
edwinjm|1 year ago
The advantage: - everybody can contribute an npm package
The disadvantage: - everybody can contribute an npm package
mkoubaa|1 year ago
DonHopkins|1 year ago
Passive voice. WHO should never use micro-libraries?
bvisness|1 year ago
ptman|1 year ago
unknown|1 year ago
[deleted]
gerdesj|1 year ago
How is this the fault of the library? You chose the wrong one!
"This often cancels out the primary benefit of libraries. No, you don’t have to write the code, but you do have to adapt your problem to fit the library"
You evaluated the library, found it unsuitable, and yet it is somehow their fault.
Why on earth would you project your own failures on to someone else's code? You do you!
bvisness|1 year ago
crabmusket|1 year ago
> I have talked a lot about the costs of libraries, and I do hope people are more cautious about them. But there’s one factor I left out from my previous discussion. I think there’s one more reason why people use libraries: fear.
> Programmers are afraid of causing bugs. Afraid of making mistakes. Afraid of missing edge cases. Afraid that they won’t be able to understand how things work. In their fear they fall back on libraries. “Thank goodness someone else has solved the problem; surely I never would have been able to.”
I think this is true, but why does the JS ecosystem seem to have "more fear" than for example the Python ecosystem?
I wrote about this a while ago. I think that actually JS does (or did) cause more fear in its developers than other programming languages. I described it as paranoia, a more insidious uncertainty.
Quoting myself[1]:
> There are probably many contributing factors that have shaped NPM into what it is today. However, I assert that the underlying reason for the bizarre profusion of tiny, absurd-seeming one-liner packages on NPM is paranoia, caused by a unique combination of factors.
> Three factors have caused a widespread cultural paranoia among JavaScript developers. This has been inculcated over years. These factors are: JavaScript's weak dynamic type system; the diversity of runtimes JavaScript targets; and the physics of deploying software on the web.
...
> Over the years there has been rapid evolution in both frontend frameworks and backend JavaScript, high turnover in bundlers and best practices. This has metastasized into a culture of uncertainty, an air of paranoia, and an extreme profusion of small packages. Reinventing the wheel can sometimes be good - but would you really bother doing it if you had to learn all the arcane bullshit of browser evolution, IE8 compatibility, implementation bugs, etc. ad infinitum?
> And it's not just that you don't understand how things work now, or how they used to work - but that they'll change in the future!
[1] https://listed.to/@crabmusket/14061/javascript-s-ecosystem-i...
bvisness|1 year ago
Certainly the language is quirky, but it really doesn't change that much. Frameworks have come and gone but JavaScript itself is still the same. is-number would have looked much the same 15 years ago, if anyone was crazy enough to actually distribute it.
Joker_vD|1 year ago
No, it's much more mundane: "Thank goodness someone else has solved the problem because I sure as hell don't want to solve it myself because I don't have either the time or the brain power/will/motivation for that." What is a number in JS? I don't even want to start thinking about it, just give me an isNumber() function. Why is it not in the standard library in the first place?
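The question is genuinely annoying, because the language's built-ins give surprising answers. These are standard JS semantics:

```javascript
// Why "what is a number in JS?" needs thinking about at all:
console.log(typeof NaN);        // "number", even though NaN means "not a number"
console.log(typeof Infinity);   // "number"
console.log(Number(""));        // 0 - the empty string coerces to zero
console.log(Number("0x1f"));    // 31 - hex string notation coerces too
console.log(0.1 + 0.2 === 0.3); // false - floating point strikes again
```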
nerdbert|1 year ago
Perhaps it's because so many JS developers - quite rightfully - suffer from impostor syndrome?
It's the language with the largest proportion of people who didn't set out to be programmers but somehow got mission-crept into becoming one.
nalgeon|1 year ago
arlojacek1|1 year ago
[deleted]
joshmarinacci|1 year ago
stevebmark|1 year ago
[deleted]
johnnyanmac|1 year ago
Why?