There's a reasonable case for including internal actors in one's threat model for larger companies or ones working in extraordinarily sensitive product domains. Most startups probably don't need to prevent the team from being able to read credentials, because that's theatre when they have 15 different ways to get to any secret the company has.
We use Ansible's vault feature to decrypt a few centralized secret files onto machines at deploy time. This lets us commit the encrypted text of the files. (The source of truth for the key is in Trello, IIRC, but it could be anywhere you have to auth in as an employee to view.)
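Concretely, that workflow looks roughly like the following (file names and playbook names here are hypothetical; only the `ansible-vault` commands themselves are real):

```shell
# Encrypt the secrets file once; the encrypted text is then safe to commit.
ansible-vault encrypt group_vars/production/vault.yml

# Edit it later without plaintext ever touching the repo.
ansible-vault edit group_vars/production/vault.yml

# At deploy time, supply the key (fetched manually from wherever your
# source of truth lives) so the play can decrypt onto the target machines.
ansible-playbook deploy.yml --ask-vault-pass
```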
It's modestly annoying (operations like "check what changed in the secret configuration file as a result of a particular commit" are impossible) but seems like a reasonable compromise to ensure that e.g. nobody can insta-create an admin session if they happen to have a copy of our codebase and a working Internet connection.
Secrets are communicated to processes which need them in boring Linux-y ways like "file i/o" and "stuff it into an environment variable that the process has access to." If you're capable of doing file i/o or reading arbitrary memory, we're in trouble. Of course, if you can do either of those on our production infrastructure and also connect to our database, we've already lost, so I don't see too much additional gain in locking down our database password.
If you're starting from the position "I have a Rails app which has passwords in cleartext in database.yml" this is an easy thing to roll out incrementally: move the password from database.yml to ENV['RAILS_DB_PASSWORD'], spend ~5 minutes getting your deployment infrastructure to populate that from an encrypted file (details depend on your deployment infrastructure -- I am liking ansible, a lot, for this), verify it works, then change passwords. Voila; Github no longer knows your database password and your continuous integration system no longer knows your production credentials. One threat down; zero coordination required with any other system you use or any other team at the company. You can standardize on this across your entire deployment or not, your call, and it's exactly as easy to back out of as it was to get started.
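The receiving end of this pattern is trivial in any language. A Python sketch (the file format and function names are made up for illustration): read the KEY=VALUE file your deploy tooling decrypted onto the box, export it into the environment, and have the app fail loudly if the secret never arrived.

```python
import os

def load_env_file(path):
    """Export simple KEY=VALUE lines (e.g. a file your deploy tooling
    decrypted onto the machine) into this process's environment."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and lines with no assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Variables already present in the environment win.
            os.environ.setdefault(key.strip(), value.strip())

def db_password():
    # Fail loudly at startup rather than at first query.
    pw = os.environ.get("RAILS_DB_PASSWORD")
    if pw is None:
        raise RuntimeError("RAILS_DB_PASSWORD is not set")
    return pw
```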
> or ones working in extraordinarily sensitive product domains.
A side project I'm working on comes under that domain (medical data). By any chance, do you have any recommendations for books on this kind of stuff? I do all the usual OWASP/best-practice stuff, but most of my day job is LoB work, and while security is important there, it's not quite the same as losing potentially thousands of people's medical data.
This article confuses me. The author tears down a strawman argument about running centralized key services ("The expensive solution"), then recommends exactly such a solution in Amazon KMS.
The only plausible way this can make sense to me is if he had said "Running your own key service is a pain; use Amazon KMS". But that's a simple service question, and it probably wouldn't have taken up as much space.
Not only that, but everything here except the question of trusting your engineers can also be solved by simply hosting these things yourself.
You don't need 3rd party code hosting on Github, just use Gitlab or JIRA. You don't need some external CI service, run your own Jenkins node. Chat and email should also be internal (we use XMPP, a local Mattermost instance would be an alternative) and SSL-only.
You can do all of this with basically 1 docker command per install on your own dedicated hardware with a fairly underpowered machine.
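For instance, a self-hosted GitLab really is close to a one-liner (ports and host paths here are illustrative; check the image's own docs before relying on it):

```shell
docker run --detach \
  --publish 443:443 --publish 80:80 --publish 2222:22 \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  --name gitlab gitlab/gitlab-ce:latest
```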
And this prevents leaking of all sorts of information, not just production database passwords. If you don't trust your engineers, you have bigger problems: as another poster pointed out, if they can modify your software to simply report the password back to them, or just log in to production and decrypt it, you're dead in the water.
KMS is not a centralized secret database -- it's a hosted Hardware Security Module. There is no way to store your service's secrets in it for later retrieval, unlike the solutions listed in the article. I suppose an argument could be made that it still provides a single point of failure; however, the risk, given the SLA KMS provides, is far lower than what one might encounter maintaining their own server cluster.
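To make that distinction concrete, here is a toy sketch of the interaction pattern. The `ToyKMS` class and its XOR "cipher" are invented for illustration -- XOR with a reused pad is not real encryption -- but the shape of the API is the point: the master key never leaves the module, and callers can only submit data keys to be wrapped or unwrapped.

```python
import secrets

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ToyKMS:
    """Illustrative stand-in for an HSM-backed key service.

    The master key is private to this object and is never returned to
    callers; they can only request fresh data keys and have wrapped
    keys unwrapped. (XOR is a placeholder, NOT real cryptography.)
    """

    def __init__(self):
        self._master = secrets.token_bytes(32)

    def generate_data_key(self):
        # Envelope pattern: return the data key in two forms --
        # plaintext (use it, then discard) and wrapped (safe to store).
        plaintext = secrets.token_bytes(32)
        return plaintext, _xor(plaintext, self._master)

    def unwrap(self, wrapped: bytes) -> bytes:
        return _xor(wrapped, self._master)
```

A service using this pattern encrypts its payload locally with the plaintext data key, stores only the ciphertext plus the wrapped key, and calls `unwrap` at read time; the master key itself never crosses the wire.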
An interesting article. I'm working on a side/long-term project that will hold medical data. It will be self-selected (i.e. people entering their own data rather than a gov dept etc.), but security is #1 on my list, since frankly the idea of leaking someone's medical data (even if they opted in and agreed to the license) scares the living shit out of me.
All my side reading recently has been on writing higher-security systems across the entire stack (I already follow best practices with my other stuff). It still frightens me, but I see a real need for the side project, so I'm going to do everything I can to make it as secure as possible and take a shot.
Depending on the data you're storing, you may be responsible for HIPAA compliance. Such a thing is possible on AWS[0], but is not provided out-of-the-box.

[0]: https://aws.amazon.com/compliance/hipaa-compliance/
The recommended solution is still vulnerable to employee compromise: if they can push software that runs as a trusted role, they can steal any secrets that software has access to.
This is certainly the case. However, in an organization implementing best practices for code deployment, such a change would have to be peer-reviewed in the best case, or pushed directly to master with an obvious paper trail in the worst. It wasn't my intention to imply that employing well-designed envelope encryption would shut the door on any possibility of an engineer gaining access to secrets; clearly there's a lot more involved in making that happen. Still, it goes a long way toward allowing the source of any leaks to be traced should they occur.
One solution we came up with was to encrypt data before it is submitted and let the user hold the private key. The private key is never transferred to our servers. (It is generated in the browser, kept by the user, and used in the browser.)
http://www.jotform.com/encrypted-forms/
I really like this solution, but it is still quite vulnerable to an inside job. To wit: if someone at JotForm wanted to, they could poison the page and recover the private key (or the data directly).
To address that you need process isolation between the storage of the ciphertext and the manipulation and use of the cleartext. This rules out the browser, since for all intents and purposes it is not an isolated process. (You could still use the browser, but you'd provide your tools as an extension that would, presumably, be inspected by users whenever it updates.)
That said, your solution takes care of a lot of other threat models, but it doesn't really protect users from you.
Every time I see someone tout AES as the reason their encryption is secure, I want to ask: in what mode? CBC, CFB, CTR, or (the best) GCM? How is the IV generated? Are there any potential padding oracles? If they don't even understand these questions, then it is obvious that AES alone cannot save them.
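For the IV/nonce question in particular, the safe default is easy to state: a fresh 96-bit random value per message, never reused under the same key (a repeated (key, nonce) pair under GCM leaks the authentication key and the XOR of the plaintexts). A stdlib-only sketch of just the nonce-generation step:

```python
import secrets

GCM_NONCE_BYTES = 12  # 96 bits, the size NIST SP 800-38D recommends

def new_gcm_nonce() -> bytes:
    """Generate a random per-message nonce for AES-GCM.

    Random nonces are fine as long as the number of messages per key
    stays far below 2**32; beyond that, rotate keys or use a
    deterministic counter scheme instead.
    """
    return secrets.token_bytes(GCM_NONCE_BYTES)
```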
Everything here except for the engineers question can also be solved by simply hosting these things yourself. You don't need 3rd party code hosting on Github, just use Gitlab or JIRA. You don't need some external CI service, run your own Jenkins node. Chat and email should also be internal (we use XMPP, a local Mattermost instance would be an alternative) and SSL-only. You can do all of this with basically 1 docker command per install on your own dedicated hardware with a fairly underpowered machine. And this prevents leaking of all sorts of information, not just production database passwords.
(Besides, one would assume it has been backdoored by Amazon staffers anyway.)
For an organization requiring the highest available security, the ideal solution would be a privately operated hardware security module kept off the DMZ. However, that, as well as the idea of self hosting (and maintaining) the entire dev, test, deploy, and prod stack suggested by another commenter, isn't always within reach of a small, agile team looking to focus on their core competencies.
One could argue that it's possible for Amazon to have falsified the description of KMS as an HSM, or the certifications[0] they were granted for it, but I'd retort that an organization in a position to seriously question those claims shouldn't be using a remote solution anyway.
So, making the more rational assumption that such claims by Amazon can be trusted, their offering is quite secure: the HSM does not allow the export of any key, and exposes only the ability to load encrypted data into the device and have it produce the decrypted result over a secure channel, and vice versa.

[0]: https://aws.amazon.com/kms/details/#compliance
Looking at the docs, it looks like the master key source is pluggable, so you don't have to use Amazon's KMS... but none of the other options inspire confidence (local file, fetch from URL, plaintext password, or no password).
At the very least, I'd like to see a plugin for using a key stored on a local TPM chip -- which almost any modern bare-metal server would be equipped with.