remosi's comments

remosi | 10 years ago | on: GCE down in all regions

(I'm a Google SRE, I'm on the team that dealt with this outage)

This did impact common infrastructure. Some (non-cloud) Google services were impacted. We've spent years working on making sure gigantic outages are not externally visible for our services, but if you looked very closely at latency to some services you might have been able to see a spike during this outage.

My colleagues managed to resolve this before it stressed the non-cloud Google services to the point that the outage was "revealed". If this was not mitigated, the scope of the outage would have increased to include non-cloud Google services.

remosi | 12 years ago | on: Google acknowledges XKCD #1361

Nine days ago the data protection authority (DPA) in Hamburg, Germany asked to audit the WiFi data that our Street View cars collect for use in location-based products like Google Maps for mobile, which enables people to find local restaurants or get directions. His request prompted us to re-examine everything we have been collecting, and during our review we discovered that a statement made in a blog post on April 27 was incorrect.

In that blog post, and in a technical note sent to data protection authorities the same day, we said that while Google did collect publicly broadcast SSID information (the WiFi network name) and MAC addresses (the unique number given to a device like a WiFi router) using Street View cars, we did not collect payload data (information sent over the network). But it’s now clear that we have been mistakenly collecting samples of payload data from open (i.e. non-password-protected) WiFi networks, even though we never used that data in any Google products.

-- http://googleblog.blogspot.co.uk/2010/05/wifi-data-collectio...

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

https://www.lorier.net/docs/tpm are my notes from experimenting with the TPM in my T530. The trick is that the TPM will protect itself fairly aggressively, so before you start, turn off the laptop and unplug the power and battery (if possible), and on the FIRST boot after you put everything back together, go into the BIOS and clear the TPM. If the menu option isn't there, then you probably have to power everything off :)

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

Your content is often not readable by the webserver's user; it's often stored in a SQL or NoSQL database somewhere, where various access controls can be applied. But you're right, unfortunately this isn't a 100% magic-pixie-dust solution to everything.

You say "you can get new keys", which is true (although StartSSL appears to be the fly in this particular ointment), but browsers don't validate CRLs, so the old keys remain just as valid as the new ones. That makes getting new keys potentially worthless.

This is providing similar protections for your TLS keys to what your database server already applies.

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

This would be ideal. One of the problems with heartbleed has been that while you can revoke your cert and mint a new one, browsers don't check CRLs so they'll continue to trust the old compromised cert.
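To illustrate just how opt-in revocation checking is: in Python's ssl module, for example, a client has to both set a verify flag and load the CRL itself before anything gets checked (the file name below is hypothetical):

```python
import ssl

# CRL checking is off by default, just like in browsers. A client that
# wants it must set the flag AND load the CRL file itself.
ctx = ssl.create_default_context()
ctx.verify_flags |= ssl.VERIFY_CRL_CHECK_LEAF
# ctx.load_verify_locations(cafile="crl.pem")  # hypothetical CRL; must be
#                                              # loaded or verification fails
```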

However, I don't think X.509's support for limiting a CA cert to signing only subdomains (the name-constraints extension) is reliably enforced by clients (could be wrong), and you have a large industry that prefers the status quo of you having to pay them for each cert you mint.

This ends up with ridiculous things like tying payment to the lifetime of the certificate, which allows for things like "2 year certs", which are obviously less secure than 2×1 year certs.

But having your server roll its cert every 12 hours from a more secure cert elsewhere would be a very nice feature.

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

But not out of your user. If I can run code as your user, I can attempt to retrieve those keys, although I assume macOS prevents you from attaching a debugger to the keychain.

Linux has Gnome-keyring, which, amongst other interfaces, operates as a PKCS#11 softhsm (I think), but it still runs as your user.

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

Having looked at PKCS#11, I'm not sure which bits you could get away with not implementing. It does have functions for things like "get random bytes", which I guess you might not want, but stubbing that out is barely any code: (CK_RV C_GenerateRandom(...) { return CKR_FUNCTION_NOT_SUPPORTED; }).

All the complexity in this proposal is the serialisation/deserialisation which is about the same amount of work if it's pkcs#11 or some custom thing.

Custom API:
Pro: Marginally simpler to implement.
Pro: If the webserver fork()s it by default, then more users get the benefit in the case where you can read the webserver's memory.
Con: Doesn't protect against attacks that can read files readable by the webserver.
Con: Becomes complicated when you want to move to a real HSM.
Con: Isn't reusable between webservers, let alone for your mail server, XMPP server, web browsers, ssh clients and so on.

Using PKCS#11:
Pro: Can start with a PKCS#11 softhsm running as a separate user today, and migrate to a hardware HSM with little change tomorrow.
Pro: Reusable across multiple webservers, and already usable by browsers and ssh clients.
Pro: A well defined, maintained, open standard with a wide variety of existing implementations.
Con: Slightly more complex than a custom protocol, but I'd argue that the custom protocol would grow to cover at least what PKCS#11 supports. I'm currently investigating using dbus for the protocol, so serialisation/deserialisation is mostly taken care of.

Am I missing some Pro for a custom protocol?
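The core of the custom-protocol idea can be sketched in a few lines (hypothetical names, with an HMAC standing in for real TLS private-key signing): the key lives only in a forked "key daemon" process, and the webserver side only ever ships digests out and gets signatures back.

```python
import hashlib
import hmac
import os

# In a real version this would be loaded only by the daemon process,
# running as a different user; here it's module-level for the sketch.
SECRET = b"server-private-key-material"

def run_key_daemon(req_r, resp_w):
    # Child process: read 32-byte digests, answer with 32-byte MACs.
    # The secret never crosses the pipe in either direction.
    while True:
        digest = os.read(req_r, 32)
        if not digest:  # parent closed its end: shut down
            os._exit(0)
        os.write(resp_w, hmac.new(SECRET, digest, hashlib.sha256).digest())

def start_key_daemon():
    req_r, req_w = os.pipe()
    resp_r, resp_w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child keeps only its ends of the pipes
        os.close(req_w)
        os.close(resp_r)
        run_key_daemon(req_r, resp_w)
    os.close(req_r)
    os.close(resp_w)
    return pid, req_w, resp_r

def sign(data, req_w, resp_r):
    # What the webserver does: ship a digest out, get a signature back.
    os.write(req_w, hashlib.sha256(data).digest())
    return os.read(resp_r, 32)
```

An attacker who can read the webserver process's memory or files sees digests and signatures, but not the key; getting the key requires compromising the daemon's user too.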

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

Yup, that pretty much sums it up. I'm currently trying to figure out if dbus could be that serialisation, since it takes care of a reasonable amount of the hard work for you. But I'm no expert on GObject, so it's slow going. (Also, I'm not sure that I'm the best person to be writing this... I don't really have that much security knowledge. I just spent a whole pile of time recently trying to figure out how to secure my (client) keys, and wondered why we didn't do something sensible for server keys.)

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

Yup. But when you have a successful attack you should consider what alternatives you have to make sure that never happens again. You might dismiss them since their cost:benefit might not be favourable. If this works, I doubt many people are going to deploy it by default, since the cost:benefit doesn't pay off for them. But it might pay off for some other people who are really pissed off right now.

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

The technique of not having the keys available to the process that's dealing in external bits works really well for DNSSEC. There's a program called opendnssec which takes care of keys, rotates them, and accesses them via PKCS#11. So you can use a hardware security module, or a softhsm. Since it's opendnssec that's doing the key rotation, it can run as a different user than your DNS server, so the fact that softhsm runs as a shared library is less of an issue.

opendnssec unfortunately is a little... industrial strength. It takes some time and consideration to configure unlike bind's "gimme the keys and I'll just take care of it for you" approach.

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

If I was running a bank, I'd hopefully use a proper HSM. You ask it to generate a private key, you then ask it for the public key, get it signed into a cert, and use that. The HSM promises to never give out the private key to anyone (including the administrator), usually in a tamper evident way (if someone did manage to extract the key, you'd notice). Even if you have root on a machine that has an HSM plugged into it, you can't get the private keys out.

However, my personal webserver isn't a bank. Not everyone can justify spending this much money on a HSM to get this level of assurance. What I'm proposing is a simpler solution that isn't robust against sophisticated attacks (eg when the attacker manages to get root), but is far more robust to some classes of the common attacks we see today (where the attacker can read any memory/file that the webserver has permissions to see).

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

The major reason is that when your website becomes popular, and becomes more of a target, you can swap out the software HSM daemon for a more sophisticated hardware solution, if everything is implemented properly, by just changing a pkcs11: URL[1] to point at the new HSM.

PKCS#11 has a few irritants, but it's a fairly sensible API, and it's already implemented by many things (browsers, gnome-keyring, ssh, ...). OpenSSL and GnuTLS at least both support it via one mechanism or another; my only real complaint on the webserver side is that the configuration knobs aren't really plumbed through.

[1]: http://tools.ietf.org/html/draft-pechanec-pkcs11uri
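For reference, a pkcs11: URL in that draft is just semicolon-separated attribute=value pairs, so even picking one apart is trivial (the token and object names below are made up):

```python
from urllib.parse import urlsplit

# Hypothetical pkcs11: URL in the draft's attribute=value;... form.
uri = "pkcs11:token=softhsm-token;object=tls-key;type=private"

# urlsplit leaves the non-hierarchical part in .path, so the attributes
# are just semicolon-separated key=value pairs.
attrs = dict(part.split("=", 1) for part in urlsplit(uri).path.split(";"))
```

Swapping the softhsm for a hardware HSM then means changing only the token (and any module path in your TLS library's config), not the application.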

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

You're right, this doesn't solve 100% of the problem. If I could solve 100% I'd be creating a startup...

Cookies are remarkably sensitive, but they can be rotated far more easily. I can make sure every cookie is transparently rotated every day or so and leave that running as a sensible background precaution. If we had infrastructure that let us renew our TLS keys every 24 hours or so, this wouldn't be such a big deal (it would still be a big deal, but not quite as bad as it is today). But TLS certs usually have lifetimes measured in years.
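The transparent-rotation trick is simple enough to sketch (hypothetical class, HMAC-signed cookies): keep the current and previous signing keys, sign with the current one, and accept either on verify, so a daily rotation never invalidates a live session.

```python
import hashlib
import hmac
import secrets

class RotatingCookieSigner:
    """Signs cookie values with the current key; accepts the current or
    previous key on verify, so rotation is transparent to users."""

    def __init__(self):
        self.current = secrets.token_bytes(32)
        self.previous = None

    def rotate(self):
        # Called from a daily cron job or similar: the old key stays
        # valid for one more rotation period, then falls off.
        self.previous, self.current = self.current, secrets.token_bytes(32)

    def sign(self, value: bytes) -> bytes:
        return hmac.new(self.current, value, hashlib.sha256).digest()

    def verify(self, value: bytes, sig: bytes) -> bool:
        for key in (self.current, self.previous):
            if key and hmac.compare_digest(
                    hmac.new(key, value, hashlib.sha256).digest(), sig):
                return True
        return False
```

A leaked signing key therefore stays useful to an attacker for at most two rotation periods, which is the property long-lived TLS keys lack.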

remosi | 12 years ago | on: Webservers shouldn't have direct access to keys

There are several softhsms, but they share the address space with your frontline daemon, which (IMHO) defeats the purpose.

While webservers' support for PKCS#11 is annoying, it's well supported by lots and lots of other stuff (usually client-side stuff like ssh, browsers etc, though). You can get webservers to do PKCS#11 today; there are docs on how to do it. They usually start with "download the source, and run configure with this pile of options."

remosi | 12 years ago | on: Google encrypts data amid backlash against NSA spying

</dev/null openssl s_client -showcerts -connect www.google.com:443

Includes in the output:

Server public key is 2048 bit
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256

(i.e. not RC4, as long as your client supports non-RC4 ciphers, and using ECDHE for PFS) and:

TLS session ticket lifetime hint: 100800 (seconds)

Session keys are discarded by the client every 1d4h, so presumably the server rotates them every 24 hours or so. The extra 4 hours allow for clock skew, I assume, or for the fact that people might be slightly late on something they check every 24 hours (e.g. when they wake up each morning).
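The lifetime arithmetic, spelled out (trivial, but it shows where 1d4h comes from):

```python
lifetime_s = 100800              # TLS session ticket lifetime hint, seconds
hours = lifetime_s / 3600        # 28.0 hours, i.e. 1 day 4 hours
slack_h = hours - 24             # 4.0 hours of slack over a daily rotation
```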

Nobody is going to make the change from 1024-bit keys to something else without first verifying that the new bit length is "secure enough" for a reasonably long time (if nothing else, you don't want to go through the expense of upgrading everything more often than you have to). Although you're right, it would be nice if they published their reasoning.

I don't know how to verify the security of Hangouts. Looking at the WebRTC standard, it doesn't appear to support encryption. There is also a lot of opposition to standardising encryption for WebRTC because of "DRM" concerns. So I guess it's probably not encrypted, but don't quote me on that.

Disclaimer: I'm a Google employee.
