top | item 7320833

Npm's Self-Signed Certificate is No More

215 points | rcconf | 12 years ago | blog.npmjs.org | reply

128 comments

[+] diminoten|12 years ago|reply
There's a comment on the blog post that I thought was very good. It was from Rob, and it ends with:

"Please, try harder or get forked. Not sure how else to say that."

[+] Pacabel|12 years ago|reply
It's disappointing to see unnecessary and unjustifiable censorship of perfectly legitimate comments like that.

I think it has become more of an issue in general as the discussion of software projects has moved away from mailing lists and newsgroups to blogs and other more centralized discussion forums.

I recently saw similar censorship happen at Opera's blogs. I would read some of the comments one evening, and there'd be some good discussion about features that the newer versions of Opera's desktop browsers are still missing. They weren't always positive comments, but they'd be relevant and remarkably civilized given the circumstances. Then I'd read the same blog post a day or two later, and entire discussion threads full of useful content would be gone.

Criticism and negative discussion are often the most valuable kind there is. They often highlight real problems that need to be solved. Deleting such commentary is not at all helpful, and is probably quite harmful.

[+] seldo|12 years ago|reply
Hi! As I said in the comments on Rob's own post: we didn't censor any comments. We did no moderation of any kind on any comments today; we were way too busy trying to fix the problem. I don't know what happened to Rob's comment, but it was nothing anyone at npm did.
[+] cjbprime|12 years ago|reply
Seems to have been moderated away by the blog owners. Classy. Here's the comment under discussion:

==

Hey. Crazy kids. This probably needs to be that one event where you sort of realize: "Oh. Shit. Other people...like...use this & stuff. We need a damn road map and a release schedule. Stop smoking dabs all day, breh."

Now is also a great time to learn how to think about the potential ramifications a production push will have prior to making said production push. And, if your change might impact some or perhaps even all of the other people who use your technology, then some degree of coordination - perhaps an email? - would be nice. It's one of those things that will help make you look professional. I suck at professionalism. You have no idea. But, even I know this much.

Because, right now, I sort of feel like I'm asking some very rightfully fearful people to consider entrusting perhaps their actual career into the development of technology they need to succeed and thrive. And, I just started recommending Node.js - with a caveat - that npm basically sucks. I hate having to do that and it needs to stop.

So, here we are.

Your words continue to be one thing, and your actions continue to be quite another. If it is even possible to break a tool like this, that tool is not enterprise grade. If there is nothing that can be done to successfully insulate a tool from unexpected behavior like this, then that tool scores less in evaluations that consider the risk of using it.

npm, at this point, has more going against it in the discussion than going for it right now. Events like this are, in the grand context, very significant and telling. They are also ill-timed. Because big, important decisions are trying to happen right now regarding the use of Node.js. It is literally on the cusp of going mainstream. And, that seems to be generating some pressure that at least one team (npm) doesn't seem to be equipped to handle.

So, before you find yourself facing a community that forks instead of trying to work with you, I would like to just make a simple recommendation. In the future, you seriously need to sit and think about the potential ramifications of a production push. And...this is the important part...if those changes are going to have a wide impact on your users - send some sort of email WELL IN ADVANCE. A flippant blog post the day of is not Doing It Right™.

Because, and I feel like I might not only be speaking for myself, I'm not going to allow the promise of Node.js to be voided by the lackluster and problematic performance of its weird bolt-on archive service. Someone, perhaps even me (as in: today), will simply replace you with a workable, decentralized solution that enterprise can specialize to purpose and communities can use to grow and thrive.

If you have any questions, ask somebody. Anybody. If you're struggling with some concept of enterprise grade operations, what people expect of you and how you can succeed with events like this in the future, I'm positive every capable person here would provide some level of guidance and support. We want you to succeed.

Please, try harder or get forked. Not sure how else to say that.

Best regards, -Rob

[+] saurik|12 years ago|reply
Who is Rob? (Was it simply attributed to "Rob", or am I supposed to know who Rob is? I do not do much with node.js, and Google searches for "rob node.js" are not pulling up anyone terribly canonical-looking; mostly just some one-off talks, and from node.js only people who work at companies I also haven't heard of that use node.js.)
[+] emily37|12 years ago|reply
If anyone is wondering what the actual change was:

It looks like the npm registry used to have a certificate signed by npm's own CA, and existing npm clients only trust that CA by default, not the normal list of Verisign, DigiCert, etc. (Trusting Verisign et al. would defeat the point of using their own CA.) The signing key for that CA is pretty darn important, and maybe there are entities other than npm, inc. who might know it (i.e. nodejitsu).

So npm, inc rolled out a new cert that looks to be signed by digicert, but existing clients don't trust Digicert until you explicitly configure them to.

I was thrown off by the SELF_SIGNED_CERT_IN_CHAIN error; I expected some error about an untrusted root CA if the problem was that npm clients didn't trust digicert, but apparently SELF_SIGNED_CERT_IN_CHAIN is what OpenSSL returns when the root CA isn't loaded.
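This OpenSSL behavior is easy to reproduce locally with a throwaway certificate (all the file names and the CN below are illustrative, not npm's actual setup):

```shell
# Make a throwaway self-signed root certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-root.key -out demo-root.crt -subj "/CN=demo-root"

# Verifying it without loading the root fails -- this is the condition
# OpenSSL reports as a self-signed-certificate error.
openssl verify demo-root.crt || echo "verify failed: root CA not loaded"

# Loading the root explicitly makes the very same certificate verify.
openssl verify -CAfile demo-root.crt demo-root.crt
```

So an old client that only trusts npm's own CA sees the (self-signed) DigiCert root at the top of the new chain as "not loaded" and reports it that way, rather than as an untrusted-root error.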

[+] mivok|12 years ago|reply
If the clients trust the npm CA, can't they just sign the digicert CA with that CA and include it in the certificate chain provided by the server? That way the chain would be:

    npm CA -> digicert CA -> any other intermediates -> server cert
Clients that only trust the digicert CA (and other standard CAs) will see that and accept it because they trust the digicert CA, and clients that trust the npm CA will trust the cert also, allowing both old and new clients to work. Once (almost) everyone has upgraded, the npm root CA can be removed from the chain presented by the server. Am I missing something here?

Edit: It looks like what I'm missing is that you'd need the private key of the digicert CA to generate the request to sign with the npm CA. I was thinking about how CAs have been migrated in the past (e.g. equifax to geotrust global CA). It looks like it won't work in this case.

Edit2: Actually, it appears to work after all. I just tested with the openssl ca command, and you give it -ss_cert instead of -in for the certificate to sign a certificate instead of a request.
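For anyone who wants to repeat mivok's test, here is a sketch with a minimal scratch CA; every file name, CN, and config value below is a throwaway stand-in, not npm's or DigiCert's real material:

```shell
# Two throwaway self-signed roots: one standing in for the npm CA,
# one standing in for the other root we want to cross-sign.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout npm-ca.key -out npm-ca.crt -subj "/CN=fake-npm-ca"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout other.key -out other-root.crt -subj "/CN=fake-other-root"

# Minimal scratch database that "openssl ca" requires.
mkdir -p demoCA/newcerts
touch demoCA/index.txt
echo 01 > demoCA/serial
cat > demoCA/ca.cnf <<'EOF'
[ ca ]
default_ca    = demo
[ demo ]
database      = demoCA/index.txt
serial        = demoCA/serial
new_certs_dir = demoCA/newcerts
certificate   = npm-ca.crt
private_key   = npm-ca.key
default_md    = sha256
default_days  = 1
policy        = anything
[ anything ]
commonName    = supplied
EOF

# -ss_cert takes an existing self-signed certificate (no CSR, and no
# private key from the other CA) and re-signs it with the npm CA.
openssl ca -batch -notext -config demoCA/ca.cnf \
  -ss_cert other-root.crt -out cross-signed.crt

# The cross-signed copy now chains to the npm CA.
openssl verify -CAfile npm-ca.crt cross-signed.crt
```

The key point is that -ss_cert consumes a certificate rather than a signing request, so only the other root's public half is needed, which is what makes the migration trick possible without DigiCert's cooperation.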

[+] thwarted|12 years ago|reply
> I was thrown off by the SELF_SIGNED_CERT_IN_CHAIN error; I expected some error about an untrusted root CA if the problem was that npm clients didn't trust digicert, but apparently SELF_SIGNED_CERT_IN_CHAIN is what OpenSSL returns when the root CA isn't loaded.

All root CAs are self-signed; that's what makes them roots. What keeps the self-signing from being an error is the certificate being listed in the CA list available to the client, which is updated out of band.
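Concretely, "self-signed" just means the subject and issuer DNs are identical, which you can check on any root certificate (a throwaway one shown here):

```shell
# Generate a throwaway root and compare its subject and issuer.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout root.key -out root.crt -subj "/CN=throwaway-root"

subj=$(openssl x509 -in root.crt -noout -subject)
iss=$(openssl x509 -in root.crt -noout -issuer)
echo "$subj"
echo "$iss"

# For a root CA the two DNs match -- the certificate signed itself.
[ "${subj#subject=}" = "${iss#issuer=}" ] && echo "subject == issuer"
```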

[+] rcconf|12 years ago|reply
This has also broken all of our own deploys on Jenkins. We had to use solution 2, and then upgrade, because solution 1 also produces the certificate error.

It was surprising that npm status didn't have any sort of warning. I had to find out about this via Twitter. I think it's extremely irresponsible, and I'm glad we've started to move away from node.js and started to use Java.

[+] benajnim|12 years ago|reply
That's not a loaded comment... Changing your programming environment over an SSL certificate? Tell us all about how awesome it is building apps in Java!
[+] tootie|12 years ago|reply
Ditto. Team wasted a few hours on this today.
[+] RyanZAG|12 years ago|reply
Probably goes down as one of the worst large scale blunders of this type given the sheer amount of people affected. It's actually fairly insane that node.js relies on npm like this - isn't it only a matter of time before one of the core node packages gets compromised and someone gets root access to thousands of servers and dev boxes?
[+] jdlshore|12 years ago|reply
Although npm ships with node, the problems aren't because of that.

People have (foolishly, in my opinion) chosen to make npm an integral part of their deployment process, which is why this change has broken a lot of people's deployments. They're going against the official npm recommendation [1], which is to check your dependencies into your source repository and not use npm in deployment scripts. (A good idea with any package manager, imo. [2])

Not that I'm excusing npm; a change like this seems like something they should have taken more carefully.

[1] Npm recommends checking product dependencies into your repo: https://npmjs.org/doc/faq.html#Should-I-check-my-node_module...

[2] I explain why checking dependencies into your repo is a good idea: http://www.letscodejavascript.com/v3/comments/live/2#comment...

[+] seiji|12 years ago|reply
Yum (and, I seem to recall, up2date) has had certificates revoked before (due to compromises of the upstream packaging system), requiring everybody to re-import a good certificate before updates would work again.

Now, system updates are a different beast than tying your development process into Magic Hosted Things In Internetland, but certificates on things have been changed before and they will be changed again. Just have to keep aware of how your systems work and what they depend on, which goes against modern "javascript with a double large lack of knowledge" development.

[+] rhizome|12 years ago|reply
If you're desperate for something, anything, then npm is what you tend to wind up with.
[+] quarterto|12 years ago|reply
> core node packages

I assume you're talking about http etc. These are part of node core and have nothing to do with npm.

[+] atonse|12 years ago|reply
I'm surprised that this wasn't one of those "hey in 60 days we'll be making this change and it will break x, y, and z. Be prepared." sort of things.
[+] IgorPartola|12 years ago|reply
Doesn't matter. When you download Node.js itself, you are doing so over HTTP. https://nodejs.org/download/ (note the httpS) does not work. Hope you enjoy being a part of that botnet :)

Edit: I used to email project owners about issues like this, but never seem to get any response, so I stopped. The worst one, in terms of losing $$ was Pandora. I keep trying to sign up for the paid service, but the form to put in your credit card details is served over HTTP. I emailed them several times with no response.

[+] chill1|12 years ago|reply
If you are so concerned about downloading anything via HTTP, why not always use a VPN? At least that way you aren't susceptible to local MITM. Just make sure you trust your VPN :)

Edit: Oh, plus you can always use a checksum to verify your download packages [1] :)

[1] http://nodejs.org/dist/v0.10.26/SHASUMS.txt
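SHASUMS.txt is a plain sha1sum manifest (one "&lt;digest&gt;  &lt;filename&gt;" line per file), so the check is a grep plus sha1sum -c. A sketch with a stand-in file in place of the real tarball, since the real one comes from nodejs.org:

```shell
# Stand-ins for the real download and the published manifest.
echo "pretend tarball contents" > node-v0.10.26.tar.gz
sha1sum node-v0.10.26.tar.gz > SHASUMS.txt

# Pull out the one manifest line for the file we downloaded and check it.
grep "node-v0.10.26.tar.gz" SHASUMS.txt | sha1sum -c -
```

In real use you'd download both the tarball and SHASUMS.txt first (ideally fetching the manifest over a channel you trust) and run the same grep and check.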

[+] leobelle|12 years ago|reply
Regarding pandora, the form post is over HTTPS:

https://www.pandora.com/radio/jsonp/v35

And you can load the form, which is just an HTML input form with some sensitive data (last 4!), over HTTPS. It sucks that HTTPS is not default though.

[+] randunel|12 years ago|reply
When package managers misbehave, the whole software bundle suffers. Maybe the Node.js project should have more (decision) power over its package manager. Node maintainers properly announce API version changes, sometimes months in advance, so you don't need to read through the PRs/issues on GitHub or mailing lists to stay up to date.

I really don't think an advance notice would have hurt. So much for automation.

[+] agilebyte|12 years ago|reply
Well, nodejitsu are trademarking NPM so let's see how that plays out.
[+] pi-rat|12 years ago|reply
Go home NPM, you're drunk!

(add all the latest drama, and you're basically just asking to be forked..)

[+] parris|12 years ago|reply
Solution 1: Upgrading npm actually does nothing and also complains about a cert error.

Solution 2: is a terrible idea for people who have their own private registries or proxy caches. Also a security issue in general...

This leaves you with needing to upgrade node, which may or may not be possible due to operating system/platform constraints. Luckily we were using this https://launchpad.net/~chris-lea/+archive/node.js/ which makes life on ubuntu ok.

We set up our own proxy cache; at this point I'm thinking about increasing the cache time to something like 2 weeks.

[+] yogo|12 years ago|reply
The inmates are running the asylum :)

But it does provide a case study for the rest of us on what to do, or what not to do.

[+] thirsteh|12 years ago|reply
Did they previously have client-side validation of the self-signed certificate, and now they changed it to "accept any cert signed by a root CA"? If so, why in the world would they do that?

Maybe I'm being hopelessly optimistic by not jumping to the conclusion that they did no validation whatsoever before?

[+] awj|12 years ago|reply
If running "npm config get ca" on my not-yet-updated copy of npm is any indication, they were using the self signed certificate.
[+] ibash|12 years ago|reply
As per Joe Grund: http://blog.npmjs.org/post/78085451721/npms-self-signed-cert...

    npm config set strict-ssl false
    npm update npm -g
    npm config set strict-ssl true

[+] awj|12 years ago|reply
That's ... still a dangerous way to accomplish the update. You're disabling a key safeguard that ensures you're actually talking to official npm servers, then re-enabling the safeguard after running the code that whoever-you-got-it-from provided.

I'm sure the likelihood of experiencing a MITM attack here is low, but the security consideration needs to be as widely advertised as the solution.

[+] honksillet|12 years ago|reply
I was just about to post this link.

The change broke bower on my laptop. The above link fixed it.

[+] imslavko|12 years ago|reply
Can someone explain what they actually did? What was the technical change that was made?

I can't figure out from the blog post whether the change was in the npm server or client.

[+] iLoch|12 years ago|reply
FYI I was able to bypass this last night by temporarily switching to the EU NPM mirror.

    npm --registry http://registry.npmjs.eu/ install blah
[+] purephase|12 years ago|reply
I was bitten by this yesterday and I didn't know WTH was up. This is good to know. I had two or three package installs work flawlessly and then they started failing randomly across servers (all staging, so it wasn't that big of an issue).

I think Rob's comment sums up my feelings. A bit more aggressively, but the same gist.

[+] 2mur|12 years ago|reply
Basically, don't ever deploy from npm. Tarballs.
[+] darkarmani|12 years ago|reply
This is exactly why I don't understand why it is called "package management." What good is NPM if you just need to deploy tarballs anyway? It's good for bloating dependencies and installing 10 of the same library. It completely punts on actually managing packages, by just copying everything -- recursively.
[+] andrewmunsell|12 years ago|reply
This is also causing an issue with Dokku: I'm trying to deploy a new Node app and it fails because of the certificate issue. Anyone know how to update the Buildpack to the latest version? Heroku already implemented a fix.
[+] agilebyte|12 years ago|reply
Yes I do :)

You can see in the following line which image is being used when deploying an app:

https://github.com/progrium/dokku/blob/master/dokku#L33

So what I did was change that to an image of my own based on the progrium/buildstep one:

  sudo docker run -i -t progrium/buildstep /bin/bash
Now you can make changes to it and save them into your new image (while you are still logged in to progrium/buildstep):

  sudo docker commit <id> myownimage
OK, and inside this image I have referenced my own buildstep instead of the default Heroku one. The relevant text document is in /build/.

And to get to the point I have changed the npm install line to run pointing to the http EU repo.

  npm install --registry "http://registry.npmjs.eu" --userconfig $build_dir/.npmrc --production 2>&1 | indent
That is one way to go about it.
[+] STRML|12 years ago|reply
FYI, npm is rolling back to an older cert that they used a few years back. [1] It's a GlobalSign cert, and the GlobalSign CA has been in npm since Aug 2012. When it propagates across their global CDN, old clients will be fixed without breaking new ones, and they can focus on a more permanent solution.

1. https://news.ycombinator.com/item?id=7323093

[+] jevinskie|12 years ago|reply
What broke? Running the npm command? Running any nodejs program? The blog post gives few details about what people were doing when they hit the issue.
[+] mmorris|12 years ago|reply
You can't npm install <foo> without getting an error message about the self-signed cert. At least until you update npm, which you usually do with npm update -g npm, but that will also fail with the same error. Fun!

The blog post doesn't mention deleting the config change after updating, which (I'm not an expert) might be important. I think this is a full fix: http://stackoverflow.com/a/22099006/2708274