top | item 31526044

NPM security update: Attack campaign using stolen OAuth tokens

272 points|todsacerdoti|3 years ago|github.blog

126 comments

[+] tommoor|3 years ago|reply
> Using their initial foothold of OAuth user tokens for GitHub.com, the actor was able to exfiltrate a set of private npm repositories, some of which included secrets such as AWS access keys.

> Using one of these AWS access keys, the actor was able to gain access to npm’s AWS infrastructure.

How many individual best practices were not followed to result in this nightmare? Sigh.

[+] dlsa|3 years ago|reply
All of them to at least some extent and several we will soon have names for because they are still emerging.
[+] jeremymcanally|3 years ago|reply
> Using their initial foothold of OAuth user tokens for GitHub.com, the actor was able to exfiltrate a set of private npm repositories, some of which included secrets such as AWS access keys.

Keep those keys out of source control, folks. There are a lot of options for secrets management these days, and making it harder for attackers to totally own you if they only manage to crack one piece of your infrastructure is key to limiting damage from this sort of attack.

[+] joshstrange|3 years ago|reply
I agree fully with you but I don't think secret management is as easy/cheap as some people pretend. On AWS, for example, each secret you store is $0.40 + $0.05 per 10,000 API calls, and that can add up if you only have 1 api key/password/etc per "Secret" (for an individual at least, I hate bleeding off $10+/mo to store tiny bits of strings). Then, once you have the secret stored, you need to set up roles/policies to be able to retrieve them.

I have this setup pretty well in my code now but getting there wasn't simple or easy from my perspective and keeping the list of secrets your IAM user can access up to date can be a pain as well.

I'm working in a lambda environment so my options might be more limited but I'm interested to see how other people are solving this issue (maybe specifically for small/side projects). As it stands my lambdas all get a role applied to them that gives them access to the secrets, but something not AWS-specific would need a "bootstrap secret" to be injected before the code could call out to the third party to get the other secrets. For Lambdas I suppose I could inject that "bootstrap secret" via environment variables, but now I've got a new issue to deal with. Injecting at build time via something like GitHub Actions Secrets is an option I guess.

All that to say, while I agree secrets should never be in source, in practice it's not super easy (I'd love to be proven wrong, maybe I'm not doing it right).
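One way to soften the per-call cost mentioned above is to cache secrets in the Lambda container's memory across invocations. This is a minimal sketch of that pattern; `fetchSecret` is a hypothetical injected function standing in for a real call such as the AWS SDK's `getSecretValue`, so the caching logic stays independent of any particular provider.

```javascript
// Cache fetched secrets in module scope so warm Lambda invocations
// reuse them instead of paying for another Secrets Manager API call.
const cache = new Map();

async function getSecret(name, fetchSecret, ttlMs = 5 * 60 * 1000) {
  const hit = cache.get(name);
  if (hit && Date.now() - hit.at < ttlMs) return hit.value; // still fresh
  const value = await fetchSecret(name); // one billable API call
  cache.set(name, { value, at: Date.now() });
  return value;
}
```

The TTL is a judgment call: shorter means rotated secrets propagate faster, longer means fewer billable calls.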

[+] urda|3 years ago|reply
A rule of thumb is: if the key is pushed into a git remote you should consider it compromised and roll a new key.
[+] dgb23|3 years ago|reply
This is also a thing that will not get past some interviewers.
[+] willcipriano|3 years ago|reply
I've never had this problem but I thought of a partial solution. Say you have your unit tests and they are using the same auth and logging mechanisms as prod. Create a user with a password like "ThisStringIsAPassword1234" and run the unit tests, having them output logs to the disk. Then see if the logs contain that value.

Anybody ever do something like that? How effective it would be probably depends on unit test coverage.

You could also probably just do the same thing in prod with a dummy user.
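The canary idea above is simple enough to sketch: seed a marker string as the test user's password, run the suite, then scan the captured log text for the marker. The function name here is made up for illustration.

```javascript
// Canary password used by the dummy test user; it should never
// legitimately appear in log output.
const CANARY = "ThisStringIsAPassword1234";

// Return the 1-based line numbers where the canary leaked into the logs.
function findLeaks(logText, canary = CANARY) {
  return logText
    .split("\n")
    .map((line, i) => (line.includes(canary) ? i + 1 : -1))
    .filter((n) => n !== -1);
}
```

A CI step would fail the build whenever `findLeaks` returns a non-empty list.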

[+] er4hn|3 years ago|reply
That is one thing that RFC 8959 is intended to solve. If you see "secret-token:" in any logs after running tests, you flag that as a problem and fail the test.
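Because RFC 8959 prefixes bearer secrets with the literal "secret-token:" scheme, the post-test check reduces to searching for that marker. A minimal sketch (the function name is hypothetical):

```javascript
// Fail loudly if any log line contains an RFC 8959 secret-token URI.
function assertNoSecretTokens(logText) {
  const offending = logText
    .split("\n")
    .filter((line) => line.includes("secret-token:"));
  if (offending.length > 0) {
    throw new Error(`secret-token leaked in ${offending.length} log line(s)`);
  }
}
```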
[+] danpalmer|3 years ago|reply
I agree it should be this simple, but I'd bet they have tests like this and unfortunately it's never quite this simple in a production system.

I've always wanted to apply strong type systems to this problem – wrapping sensitive data in types that do not have the ability to be printed to logs would theoretically allow you to know after type-checking that passwords can't be output to logs. However again I think this is wishful thinking as a password needs to be sent somewhere at some point, and that creates places where issues can occur.

[+] tmlb|3 years ago|reply
I've worked on a service that handled credentials where we added tests like this to try to catch if a log statement gets added containing the username/password. We used a few end-to-end tests rather than attempting to include something like this in the unit tests for every function.

Our tests would set up the app's full context, get a hook into the logging framework to watch for log statements, then make requests to the service containing a set of dummy credentials, like { username: "foo", password: "bar" }. If a log statement containing "foo" or "bar" was detected the test failed.

It's not going to catch every type of issue, but at least some potential footguns can be prevented this way.

[+] chaps|3 years ago|reply
I saw this at a.. scary-large.. company I was at in 2014ish. If a client changed their password, it would get logged into a log file, plaintext. I asked a coworker why they did this, and he said it was to tell the client their password. Hm.

They did actually patch it before I got there though.. but they didn't get rid of the years-old log files with the passwords. Found them while trying to find the root password (unsuccessfully) for a host that we couldn't reboot. The ones I tested still worked.

I wouldn't be surprised if something similar happened here. Old log files in backups and such.

[+] richardfey|3 years ago|reply
You say "attack campaign", I say bad habits catching up with secrets scanners *and* someone noticing it. Black hats might have been exploiting this already in the past.
[+] jabiko|3 years ago|reply
https://github.blog/2022-05-26-npm-security-update-oauth-tok...

> Using their initial foothold of OAuth user tokens for GitHub.com, the actor was able to exfiltrate a set of private npm repositories, some of which included secrets such as AWS access keys.

So NPM was storing AWS secrets in their (private) git repos. IMHO that was an accident waiting to happen.

[+] philsnow|3 years ago|reply
> Unrelated to the OAuth token attack, we also recently internally discovered the storage of plaintext credentials in GitHub’s internal logging system for npm services

This isn't the first time GitHub has found logging of plaintext credentials [0]; it's not a good look, for a company with the resources that GitHub has, to have to disclose it again almost exactly 4 years later.

[0] https://www.zdnet.com/article/github-says-bug-exposed-accoun...

[+] tinus_hn|3 years ago|reply
This happens all the time in a lot of companies, you just don’t hear about it.
[+] blip54321|3 years ago|reply
This isn't uncommon. For example, when MIT/Harvard took over edX from the original researcher who built it, they found they didn't know how to build software. The new team introduced this same issue: passwords in log files.

It was fixed much later. You can look over the git logs.

I'm not sure this is so much of a "how could this happen," but a "thank you for being transparent." Most organizations cover this sort of thing up, like MIT/Harvard (and now 2U).

Good of github to announce this openly!

[+] junon|3 years ago|reply
Top 10 maintainer here, got a few emails this morning about it.

Meh. Shit happens. If we all Pikachu face every time an exploit happens we are lying to ourselves. We'll never reach perfect security. It's a pipe dream.

What matters more is the disclosure and response. I'm not a huge advocate of npm personally, but I respect their response to this thus far. From what I gather (the email was a bit long-winded) nothing vastly detrimental occurred, they automatically invalidated passwords, and publishing again next time will require a couple of minutes of extra work at most. I'll take it.

Let's all stop acting like products need to be perfectly and eternally secure. That's not how threat modelling works, any security professional knows that's impossible, and it's unfair to expect that from anyone, including big corporations.

Npm has done a lot of relevant and good work toward their security efforts over the years, in some cases going a bit far even in my own opinion. The comments I've seen so far have been a bit unfair.

[+] capableweb|3 years ago|reply
If this was a clever hack, I'd agree with you, shit happens.

But for gods sake, having secrets hardcoded in VCS??

You seem to understand threat modelling. What's the threat towards one of the biggest and most used package registries?

It's not like npm Inc just started running the registry. They have been doing this for years. To let such a beginner mistake risk the supply chain of basically the entire JS ecosystem is not only sloppy, it's completely unprofessional.

What can we do in the short term? I'm not sure, but I hope smarter people than me come up with some solutions ASAP before a compromise like this starts actually impacting developers using npm.

[+] goodpoint|3 years ago|reply
This is why we should use packages from well-known Linux distributions instead of npm/pip/cargo etc.

I know the available libraries are a fraction of the ecosystem, but very often it's a good enough fraction if you are willing to be flexible in your choices.

[+] noodlesUK|3 years ago|reply
Serious question: what’s the difference? Do Linux distros actually audit packages very much? Supply chain attacks in Linux distros are pretty scary, as you can basically expect them to run install scripts as root.
[+] pmontra|3 years ago|reply
A fraction and based on my old Ruby gems memories, fairly out of date.

I remember that somebody posted a reply to a comment of mine here on HN years ago saying that s/he was rebuilding every single gem as a deb package before deploying it in production and that was the only sensible way to do it. I don't think it adds much to security unless they also read all the code, but it's a lot of work that none of my customers are going to pay for. I also probably don't want to start a profession of deb builder for Ruby gems.

[+] maxloh|3 years ago|reply
You cannot download React from apt/yum AFAIK.
[+] dodgerdan|3 years ago|reply
Will this be met with a shrug from the JS community? Or is this the come to Jesus moment for the JS supply chain?
[+] ratww|3 years ago|reply
What do you expect the community to do?

Stop using thousands of packages? Start vetting packages, as if security was important?

There are already dozens of us saying it is possible to not have too many dependencies, and vet packages before installing. But every time we open our mouths we are treated as if we just escaped some sort of insane asylum.

At a place I worked in the past we used to have a 40-line microservice using plain-node without any dependencies. That was by design. One junior dev took it upon himself, in their spare time, to convert the whole thing to use some js MVC framework, complete with a full-blown build process, transpilers, and all the nine yards. There was a big discussion in the PR and a lot of juniors complained that we should migrate because they "didn't learn plain node.js in college".

We can't have nice things anymore.

[+] VoidWhisperer|3 years ago|reply
For once, I don't think this highlights an issue specific to JS: this could've happened to any package system that GitHub owned, since the attacker was able to pivot after accessing the private repos.
[+] dotancohen|3 years ago|reply
> Will this be met with a shrug from the JS community?

Go read the comment that begins "Top 10 maintainer here". Not even a shrug.
[+] rmbyrro|3 years ago|reply
Was this incident facilitated by something inherent in the JS ecosystem? I have the impression it wasn't.

The JS ecosystem sucks, but anyway, not particularly their fault in this case.

[+] EnKopVand|3 years ago|reply
I’m of a bit of an opposite mind on the many, and usually very public, NPM security issues. Because from my experience the JS ecosystem, and its woes, teach a lot of people to never trust the part of their operation that is coming from someone else. I mean not everyone, obviously, but in my anecdotal experience it’s far more common to see good package control and review processes for JS than any other language, well except for maybe Python when the Python is done by software engineers and not “data-scientists”.

Supply chain security is immensely important, and I encourage you not to learn about it the hard way like I did. Which somewhat ironically happened in the .Net ecosystem when one of our trusted Nuget packages got hacked many years ago. Now, I could be mistaken and I hope I am, but I suspect that if you ask a Java, a JS and a C# developer if they trust their ecosystem, then only one of them is likely to say yes.

So no, there won’t be some great revelation in the JS community. The best you can hope for with stories like these is that fewer developers feel like imposters when they realise that GitHub stores plaintext security assets in their logs.

[+] stolenmerch|3 years ago|reply
As a member of the JS community: shrug. I revoked my OAuth apps on Github, changed my passwords. The tarballs are unaffected. Not worried.
[+] tuxie_|3 years ago|reply
What would you expect the JS community do after this? What would you do?
[+] SkyPuncher|3 years ago|reply
Yes, because this fundamentally wasn't an attack against NPM or any specific package manager. This stemmed from a breach at Heroku.
[+] glenngillen|3 years ago|reply
Did I miss the previous GitHub announcements about this amidst all the noise about how badly Heroku handled their part of this problem? Or have GitHub been sitting on the specific facts that a database, emails, and hashed passwords were leaked for over a month now?
[+] ralph84|3 years ago|reply
I take it you’ve never been involved in a breach investigation. Figuring out what an attacker had access to and whether they exploited that access isn’t trivial, especially for a heavily used service like npm. To say they were “sitting on” information while probably tens if not hundreds of engineers assisted in making sure the investigation was complete and accurate is uncharitable.
[+] fomine3|3 years ago|reply
Reminder: npm is now owned by GitHub
[+] classified|3 years ago|reply
On one hand it's good to have an organization with lots of engineering resources behind these, but I have to wonder whether that much stuff in one place (Microsoft in this case) won't bite us down the line.
[+] eximius|3 years ago|reply
And this is one reason why client side hashing is a good idea (in addition to other procedures).

Even if you screw up, the impact is so much less severe.

[+] 01acheru|3 years ago|reply
If you do client side hashing then the hash becomes the password, it only helps with password reuse on other services. From your service perspective the security issue doesn't change much.