remosi | 10 years ago | on: GCE down in all regions
remosi's comments
remosi | 12 years ago | on: Google acknowledges XKCD #1361
In that blog post, and in a technical note sent to data protection authorities the same day, we said that while Google did collect publicly broadcast SSID information (the WiFi network name) and MAC addresses (the unique number given to a device like a WiFi router) using Street View cars, we did not collect payload data (information sent over the network). But it’s now clear that we have been mistakenly collecting samples of payload data from open (i.e. non-password-protected) WiFi networks, even though we never used that data in any Google products.
-- http://googleblog.blogspot.co.uk/2010/05/wifi-data-collectio...
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
When you say "you can get new keys", that's true (although StartSSL appears to be the fly in this particular ointment). But browsers don't validate CRLs, so the old keys are still just as valid as the new ones, which makes getting new keys potentially worthless.
This is providing similar protections for your TLS keys to what your database server already applies.
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
However, I don't think X.509 supports the concept of CA certs being limited to signing only subdomains (could be wrong), and you have a large industry that prefers the status quo of you having to pay them for each cert you mint.
This ends up with ridiculous outcomes like tying payment to the lifetime of the certificate, which gives us "2 year certs", obviously less secure than 2×1 year certs.
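For a rough sense of that tradeoff, assume revocation is ineffective (browsers not checking CRLs, as above) and a key is compromised at a uniformly random point in a cert's life: the attacker's expected window of impersonation is half the cert's lifetime. A back-of-envelope sketch (the numbers and function name are mine, purely illustrative):

```python
# Back-of-envelope model: a key stolen at a uniformly random moment during
# a cert's validity stays usable until the cert expires, since browsers
# don't check CRLs.  Expected remaining validity is half the cert lifetime.
def expected_exposure_days(cert_lifetime_days: float) -> float:
    return cert_lifetime_days / 2

two_year = expected_exposure_days(730)   # one 2-year cert: ~365 days of exposure
one_year = expected_exposure_days(365)   # each of 2x 1-year certs: ~182.5 days
```

So under this (crude) model, the 2-year cert roughly doubles the expected impersonation window.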
But having your server roll its cert every 12 hours from a more secure cert elsewhere would be a very nice feature.
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
Linux has Gnome-keyring, which, amongst other interfaces, operates as a PKCS#11 softhsm (I think), but it still runs as your user.
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
All the complexity in this proposal is in the serialisation/deserialisation, which is about the same amount of work whether it's PKCS#11 or some custom thing.
Custom API:
Pro: Marginally simpler to implement.
Pro: If the webserver fork()s it by default, then more users get the benefit for the case where the attacker can read the webserver's memory.
Con: Doesn't protect against attacks that can read files readable by the webserver.
Con: Becomes complicated when you want to move to a real HSM.
Con: Isn't reusable between webservers, let alone for your mail server, XMPP server, web browsers, ssh clients and so on.
Using PKCS#11:
Pro: Can start with a PKCS#11 softhsm running as a separate user today, and migrate to a hardware HSM with little change tomorrow.
Pro: Reusable across multiple webservers, and already usable by browsers and ssh clients.
Pro: A well-defined, maintained, open standard with a wide variety of implementations that already exist.
Con: Slightly more complex than a custom protocol, but I'd argue that the custom protocol would grow to cover at least what PKCS#11 supports.
I'm currently investigating using dbus for the protocol, so serialisation/deserialisation is mostly taken care of.
Am I missing some Pro for a custom protocol?
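The separate-user idea above can be sketched in miniature: a key-holding process answers signing requests over a socket, so the webserver process never sees the key. This is a toy sketch of mine, not a PKCS#11 implementation; HMAC-SHA256 stands in for the RSA private-key operation, since Python's stdlib has no RSA.

```python
import hashlib
import hmac
import os
import socket

# Toy sketch of "keys live in another process" (Unix-only: uses fork).
# The key daemon holds the secret and answers signing requests over a
# socket pair, so the webserver process never has the key in its memory
# or on files it can read.  A real deployment would speak PKCS#11 to a
# softhsm or hardware HSM instead of this ad-hoc protocol.

webserver_end, daemon_end = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child: the key daemon.  The secret is created *after* fork, so it
    # only ever exists in this process.
    webserver_end.close()
    secret = os.urandom(32)
    digest = daemon_end.recv(32)
    daemon_end.sendall(hmac.new(secret, digest, hashlib.sha256).digest())
    daemon_end.close()
    os._exit(0)

# Parent: the webserver.  It sends a digest and gets back a signature;
# compromising this process leaks signatures, not the key itself.
daemon_end.close()
webserver_end.sendall(hashlib.sha256(b"ClientHello...").digest())
signature = webserver_end.recv(32)
os.waitpid(pid, 0)
```

An attacker who can read the webserver's memory or files gets past signatures but not the key, which is exactly the class of protection being argued for here.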
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
opendnssec unfortunately is a little... industrial strength. It takes some time and consideration to configure, unlike BIND's "gimme the keys and I'll just take care of it for you" approach.
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
However, my personal webserver isn't a bank. Not everyone can justify spending this much money on a HSM to get this level of assurance. What I'm proposing is a simpler solution that isn't robust against sophisticated attacks (eg when the attacker manages to get root), but is far more robust to some classes of the common attacks we see today (where the attacker can read any memory/file that the webserver has permissions to see).
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
PKCS#11 has a few irritants, but it's a fairly sensible API, and it's already implemented by many things (browsers, gnome-keyring, ssh, ...). OpenSSL and GnuTLS at least both support it via one mechanism or another; my only real complaint from the webserver side is that the configuration knobs aren't really plumbed through.
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
opencryptoki has a softhsm too, but again, it appears to run in process. Same problems.
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
Cookies are remarkably sensitive, but they can be rotated far more easily. I can make sure that every cookie is rotated transparently every day or so and leave that running as a sensible background precaution. If we had infrastructure that let us renew our TLS keys every 24 hours or so, this wouldn't be such a big deal (it would still be a big deal, but not quite as bad as it is today). But TLS keys usually have lifetimes measured in years.
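The transparent daily rotation described above might look something like this (names and layout are mine, not from any particular framework): sign cookies with the current key, keep accepting the previous key during an overlap window, then drop it.

```python
import hashlib
import hmac
import os

# Illustrative cookie-key rotation with an overlap window: new cookies are
# signed with the current key, but yesterday's key is still accepted, so
# rotating never logs users out.  Dropping a key id from the dict is how
# a key is retired.
keys = {"2013-09-10": os.urandom(32), "2013-09-11": os.urandom(32)}
current_key_id = "2013-09-11"

def sign_cookie(value):
    # value must not contain ':' in this toy encoding
    mac = hmac.new(keys[current_key_id], value.encode(), hashlib.sha256)
    return f"{current_key_id}:{value}:{mac.hexdigest()}"

def verify_cookie(cookie):
    key_id, value, mac_hex = cookie.rsplit(":", 2)
    key = keys.get(key_id)  # unknown or retired key id -> reject
    if key is None:
        return None
    expected = hmac.new(key, value.encode(), hashlib.sha256).hexdigest()
    return value if hmac.compare_digest(expected, mac_hex) else None
```

A nightly job just adds a new key, advances `current_key_id`, and removes anything older than the overlap window.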
remosi | 12 years ago | on: Webservers shouldn't have direct access to keys
While webservers' support for PKCS#11 is annoying, it's well supported by lots and lots of other stuff (usually client-side stuff like ssh, browsers etc, though). You can get webservers to do PKCS#11 today; there are docs on how to do it. They usually start with "download the source, and run configure with this pile of options."
remosi | 12 years ago | on: Google encrypts data amid backlash against NSA spying
Includes in the output:

  Server public key is 2048 bit
  ...
  Protocol  : TLSv1.2
  Cipher    : ECDHE-RSA-AES128-GCM-SHA256

(i.e. not RC4, as long as your client supports non-RC4 ciphers, and uses ECDHE for PFS) and:

  TLS session ticket lifetime hint: 100800 (seconds)

Session keys are discarded by the client every 1d4h, so presumably the server rotates them every 24 hours or so. The extra 4 hours allows for clock skew, I assume, or for the fact that people might be slightly late on something they check every 24 hours (e.g. when they wake up each morning).
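The lifetime-hint arithmetic above can be checked directly:

```python
# Converting the session ticket lifetime hint to the "1d4h" figure above.
hint_seconds = 100800
hours = hint_seconds / 3600          # 28.0 hours
days, rem_hours = divmod(hours, 24)  # 1 day, 4 hours
slack_over_daily = hours - 24        # 4 hours of headroom over a 24h rotation
```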
Nobody is going to make the change from 1024 bit keys to something else without first verifying that the new bit length is "secure enough" for a reasonable enough time (if nothing else, you don't want to have to go through the expense of the process of getting everything upgraded more often than you have to). Although you're right, it would be nice if they published their reasoning.
I don't know how to verify the security of Hangouts. Looking at the WebRTC standard, it doesn't appear to support encryption, and there is also a lot of opposition to standardising encryption for WebRTC because of "DRM" concerns. So I guess it's probably not encrypted, but don't quote me on that.
Disclaimer: I'm a Google employee.
This did impact common infrastructure. Some (non-cloud) Google services were impacted. We've spent years working on making sure gigantic outages are not externally visible for our services, but if you looked very closely at latency to some services you might have been able to see a spike during this outage.
My colleagues managed to resolve this before it stressed the non-cloud Google services to the point that the outage was "revealed". Had it not been mitigated, the scope of the outage would have grown to include non-cloud Google services.