I hate to be the negative guy (and they were hashing passwords better than 90% of sites), but it would be SO easy to completely neutralize password leakage when the attacker only has access to the database.
tl;dr: Hardcode a second salt in your application code or in an environment variable. Then a database dump is no longer enough to do any kind of bruteforce.
It's simple, free and you can retroactively apply it. Full write-up: https://blog.filippo.io/salt-and-pepper/
EDIT: I addressed some of the points raised in this thread here: https://blog.filippo.io/salt-and-pepper/#editedtoaddanoteonr...
Is there any significant evidence that peppering passwords helps? I've seen arguments for and against peppering out on the big bad internet. Everyone has opinions but there are few people's opinions about crypto that I actually trust.
The best article I've seen against this technique is by ircmaxell [0]. Nicely summed up in this sentence: "It is far better to use standard, proven algorithms then to create your own to incorporate a pepper."
Anyone have source material (academic paper, Bruce "The Crypto God" Schneier blog post) that sheds some light on peppering passwords?
I'd be much more interested in how many iterations of bcrypt Slack was using. That has a much bigger bearing on events for me. Anyone at Slack know/want to answer that question?
[0] http://blog.ircmaxell.com/2012/04/properly-salting-passwords...
We have an approach to this which doesn't modify the password hashing at all, so it can't possibly reduce the strength: we store users and password hashes (currently bcrypt*) in two separate tables, and the key used to look up a given user's password hash is an encrypted version of their user id. The encryption key for doing this transformation is loaded dynamically at app startup and has some extra security precautions around storing it, so it's not baked into either the source code or a config file.
The upshot is that if you get a db dump and are able to brute force some bcrypt hashes, you won't know what usernames they go with. If you get a db dump and our source code, you're still out of luck. If you get ahold of an old server hard drive, you're out of luck. If you root a running server and inspect the process memory, you can obtain this key.
This scheme also allows the mapping key to be rolled, which would immediately invalidate all passwords in the system.
*we also version our password hash records so we can migrate from bcrypt to a new scheme fairly painlessly if it's warranted in the future.
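A minimal sketch of this two-table idea (the names and the HMAC choice are assumptions; the comment says "an encrypted version of their user id", for which any deterministic keyed transform works):

```python
import hashlib
import hmac

# Hypothetical sketch: password hashes live in a separate table keyed by a
# keyed transform of the user id, so a dump of both tables doesn't reveal
# which hash belongs to which user without the mapping key (loaded at app
# startup, not stored in source or config).
def lookup_key(user_id: int, mapping_key: bytes) -> str:
    # HMAC stands in for "an encrypted version of their user id".
    return hmac.new(mapping_key, str(user_id).encode(), hashlib.sha256).hexdigest()

K = b"loaded-dynamically-at-startup"   # made-up placeholder value
users = {42: "alice"}                  # users table
password_hashes = {                    # separate password-hash table
    lookup_key(42, K): b"$2b$12$...bcrypt hash placeholder...",
}

# Login path: derive the key again to fetch the right hash.
stored = password_hashes[lookup_key(42, K)]
```

Rolling K makes every lookup miss, which is what invalidates all passwords at once.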
I generally disagree with hardcoded salts; you should assume everything is compromised in a successful attack. But I'm actually commenting here because I don't see how you can retroactively apply the second salt to a hashed string. Could you please elaborate or share a link?
Later edit: I'm referring to your example in your link:
salt = urandom(16)
pepper = "oFMLjbFr2Bb3XR)aKKst@kBF}tHD9q"  # or: getenv('PEPPER')
hashed_password = scrypt(password, salt + pepper)
store(hashed_password, salt)
How do you retroactively apply this?
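On the retroactive question raised above: one way (a sketch, not necessarily what the linked article proposes) is to wrap each already-stored hash with an HMAC under the pepper key in a one-time migration, which needs no plaintext passwords:

```python
import hashlib
import hmac

def wrap_existing_hash(stored_hash: bytes, pepper_key: bytes) -> bytes:
    # One-time migration over the whole table: no plaintext password needed.
    return hmac.new(pepper_key, stored_hash, hashlib.sha256).digest()

def check(inner_hash: bytes, pepper_key: bytes, wrapped: bytes) -> bool:
    # At login, compute the old (unpeppered) hash from the password as
    # before, then apply the same HMAC and compare in constant time.
    candidate = hmac.new(pepper_key, inner_hash, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, wrapped)
```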
Why aren't you storing the (N, r, p) parameters too? Is it because your library's "scrypt" function automatically encodes all the parameters it uses in the returned string? (This is not hypothetical; it's exactly what many scrypt implementations do.) If so, congratulations: You just stored the pepper bits in your database, because they're part of the "hashed_password" value.
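For contrast, Python's stdlib hashlib.scrypt takes the parameters explicitly and returns only the raw derived key, so you control exactly what gets stored (the pepper value below is a made-up placeholder):

```python
import hashlib
import os

N, R, P = 2**14, 8, 1                      # store these alongside each hash
PEPPER = b"made-up-pepper-keep-out-of-db"  # from code or env in practice

def hash_password(password: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    # hashlib.scrypt returns raw key bytes only -- there is no encoded
    # parameter/salt string, so the pepper bytes never end up inside the
    # stored value's format.
    key = hashlib.scrypt(password, salt=salt + PEPPER, n=N, r=R, p=P)
    return salt, key  # persist (salt, key, N, R, P); never the pepper
```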
The best summary I've read recently on peppering was posted to the PHC list by Thomas Pornin last week. It's worth quoting in its entirety:
> Adding an additional secret key can be added generically, in (at least)
> four ways, to any password hashing function:
> 1. Store: salt + HMAC_K(PHS(pass, salt))
> 2. Store: salt + PHS(HMAC_K(pass), salt)
> 3. Store: salt + AES_K(PHS(pass, salt))
> 4. Store: salt + PHS(AES_K(pass), salt)
> I have used here "HMAC" to mean "some appropriate MAC function" and
> "AES" to mean "some symmetric encryption scheme".
> These methods are not completely equivalent:
> -- With method 1, you forfeit any offline work factor extension that
> the PHS may offer (i.e. you can no longer raise the work factor of a
> hash without knowing the password). With methods 2 and 4 such work
> factor extension can be done easily (if the PHS supports it, of
> course). With method 3, you can do it but you need the key.
> -- With methods 2 and 4, you must either encode the output of HMAC
> or AES with Base64 or equivalent; or the PHS must support arbitrary
> binary input (all candidates should support arbitrary binary input
> anyway, it was part of the CfP).
> -- Method 4 requires some form of symmetric encryption that is either
> deterministic, or can be made deterministic (e.g. an extra IV is
> stored). ECB mode, for all its shortcomings, would work.
> -- Method 3 can be rather simple if you configure PHS to output exactly
> 128 bits, in which case you can do "raw" single-block encryption.
> -- Methods 1 and 3 require obtaining the "raw" PHS output, not a
> composite string that encodes the output and the salt. In that sense,
> they can be a bit cumbersome to retrofit on, say, an existing bcrypt
> library.
> The important points (in my opinion) to take into account are:
> 1. This key strengthening (some people have coined the expression
> "peppering" as a bad pun on "salting") can be done generically; the
> underlying PHS needs not be modified or even made aware of it.
> 2. Keys imply key management, always a tricky thing. Key should be
> generated appropriately (that's not hard but it can be botched in
> horrible ways), and stored with care. Sometimes the OS or programming
> framework can help (e.g. DPAPI on Windows). Sometimes it makes things
> more difficult. You need backups (a lost key implies losing all the
> stored passwords), but stolen backups are a classical source of
> password hashes leakage, so if you do not take enough care of the
> security of your backups then the advantage offered by the key can go
> to naught.
> 3. For some historical reasons, many people feel the need to change
> keys regularly. This is rather misguided: key rotation makes sense in
> an army or spy network where there are many keys, and partial
> compromissions are the normal and expected situation, so a spy network
> must, by necessity, be in permanent self-cleansing recovery mode; when
> there is a single key and the normal situation is that the key is NOT
> compromised, changing it brings no tangible advantage. Nevertheless,
> people insist on it, and this is difficult. The "method 3" above
> (encryption of the PHS result) is the one that makes key rotation
> easiest since you can process all stored hashes in one go, as a
> night-time administrative procedure.
> 4. Key strengthening makes sense only insofar as you can keep the key
> secret even when the attacker can see the hashes. In a classical
> Web-server-verifies-user-passwords context, the hashes are in the
> database; one can argue that database contents can be dumped through a
> SQL injection attack, but a key stored outside the database might evade
> this partial breach. But if the key is in the database, or the breach
> is a stolen whole-server backup, then the key does not bring any
> advantage.
> 5. If you _can_ store a key that attackers won't steal, even if they
> get all the hashes, then you can forget all this PHS nonsense and just
> use HMAC_K(pass) (or HMAC_K(user+pass)). The key must thus be
> envisioned as an additional protection, a third layer (first layer is:
> don't let outsiders read your hashes; second layer is: make it so that
> your hashes are expensive to compute, in case the first layer was
> broken through).
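A sketch of Pornin's method 2 (hash the HMAC of the password), using stdlib scrypt as a stand-in for the PHS; the key and parameters here are illustrative:

```python
import hashlib
import hmac
import os

def hash_password(password: bytes, key: bytes) -> tuple[bytes, bytes]:
    # Method 2: store salt + PHS(HMAC_K(pass), salt). Pre-hashing with HMAC
    # also gives the PHS a fixed-length binary input, per Pornin's second
    # bullet above.
    salt = os.urandom(16)
    mac = hmac.new(key, password, hashlib.sha256).digest()
    return salt, hashlib.scrypt(mac, salt=salt, n=2**14, r=8, p=1)

def verify(password: bytes, key: bytes, salt: bytes, stored: bytes) -> bool:
    mac = hmac.new(key, password, hashlib.sha256).digest()
    candidate = hashlib.scrypt(mac, salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, stored)
```

Per the quote, offline work-factor extension remains available with this construction if the underlying PHS supports it.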
P.S. As the inventor of blind hashing (which would have prevented this breach entirely) I have a serious horse in this race. We launch publicly at RSA 2015 in San Francisco. Hope to see you there!
I don't see that as a negative suggestion: that's a fantastic idea, and for all we know, a Slack employee will read your post, and make their hashing even better. :-)
That's a nice trick; I've read about it elsewhere but never used it. Will do for sure in the future!
Are you sure that the intruder did not have server access? I mean, the info "We were recently able to confirm that there was unauthorized access to a Slack database storing user profile information." is not enough to deduce that this was an SQL injection (although it might very well be).
Stay strong :-)
I still wonder what value a password leak has. I changed my password; my old password was DV1wn3yHk6W-8m9lZNo_ and now you all know it, so what? I don't care, and I believe you don't care either. On the other hand, if they were after valuable data, they had access to the database and they got what they wanted. So the password is much less valuable than the other stuff they might have wanted, like chat logs which might contain credentials to other services.
>"If you have not been explicitly informed by us in a separate communication that we detected suspicious activity involving your Slack account, we are very confident that there was no unauthorized access to any of your team data (such as messages or files)."
Under their FAQ on the post. It could be inferred that there was some unauthorized access to certain users' communication logs?
The post notes that the breached database is the user table, which would not contain chat history. I agree that making this abundantly clear makes sense.
I am an application security professional, and I created this account in order to make this post after reading many of the comments on this thread.
Many of the comments have great suggestions. However, very few talk about the most important part of creating mitigations and designing key management/crypto. What is the security target?
Before throwing new designs at a problem, the attackers and attack vectors must be defined. If you don't know who you are guarding against and what they will do (and what data they will steal), then how can you possibly say what is a good mitigation?
One might argue that the threat is obvious, but I'll guarantee you that there are dozens of threats here. List them. Prioritize them. Then mitigate them. It is helpful to fully understand the problem/solution space before jumping in with peppers, salts, extra databases, and other solutions.
It's refreshing to 1) see a breach notification including the actual password hashing algorithm, 2) see they're using a strong one like bcrypt (presumably with a reasonable cost factor).
Regardless, this is an example of why cloud communication (and ticketing and database off-loading [see MongoHQ] and...) systems probably won't ever become commonplace in most of the government space and the finance and health sectors.
I think this just goes to show exactly why these systems will become more commonplace. There are only so many security experts to go around. Having all the very best concentrated on a smaller set of services seems like it makes more sense than trying to get a security expert for every service.
> Regardless, this is an example of why cloud communication (and ticketing and database off-loading [see MongoHQ] and...) systems probably won't ever become commonplace in most of the government space and the finance and health sectors.
I agree. We might not like rolling out our own instances, but it prevents hackers from being able to grab ALL THE DATA in one fell swoop. It really amazes me that some EHR systems have gone the cloud route.
How does one discover that they were hacked? The post states that the breach occurred during February, and this is the end of March... did it just take them a long time to react and write a post about it, or did they likely discover after the fact? If so, how?
Surprisingly, they didn't force a password reset on all accounts. Even though the passwords are hashed and salted, targeting a couple users and checking for weak passwords can now be done offline, with no rate-limiting or network calls necessary. In breaches like these, it should still be mandatory to issue service wide password resets. Anything less is unacceptable.
If I were Slack, I would pretend to get hacked. Slack critics often point to its centralized architecture as a weak point, because rational corporations should not entrust security of their internal communications to a third party. Particularly when that third party aggregates communications of its many clients, it becomes a target of hacking. Why hack a single corporation when you can hack Slack and get all their clients at once?
This is a valid criticism. Slack can do all it can to mitigate security risk. But at the end of the day, there is always at least one vulnerability, somewhere.
As Slack matures as a company, it needs an answer to this criticism. Because security is so naturally unpredictable, it would be disingenuous for Slack to respond with anything resembling "our security is perfect." Because, of course, as we see time and time again, no security is perfect.
Now that Slack has captured the low-hanging-fruit of the market, it needs to pick the high-hanging-fruit. The most profitable clients for slack will be the largest, conservative, enterprise clients who will join the Slack platform and then never leave. The long term survivability prospects of Slack depend on capturing these large enterprise customers.
Strategically, Slack needs to find a response to the criticism that its security is prohibitively weak, so that it can convince these large enterprises to join its platform.
Perhaps, the best response to security criticism is that "we got hacked, but our internal policies mitigated any cascading effects and customer data remains safe." [0] [1] So would it be in Slack's best interest to stage a hack on itself? Or to report a hack occurred when it really didn't?
It seems feasible that by setting precedent for its reaction to a hack, Slack has a chance to demonstrate the competence of its security team. Now investors can point to this incident as one handled well by the security team. In a world where, unfortunately, corporations will always get hacked, Slack was able to survive with some dignity.
[0] or, as safe as it can possibly be according to computer science.
[1] debatable.
So a database gets hacked, they add MFA and people are arguing about peppering passwords. What about the part on how the hackers got access to the database in the first place?
Passwords are not the only sensitive info that can be stored in a database and most of the time, that info isn't hashed.
I would be more interested in how the hacker got access to their DB and nothing else. Maybe the DB is remotely accessible (unlikely), or there is an SQLi vulnerability in Slack.
One thing that bugged me about this today was that after I changed my password on desktop, my mobile session wasn't invalidated. Apparently it's an option for mass password resets, but it really should be mandatory.
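One common way to get that behavior (a sketch; the field names are made up) is to stamp each session with a per-user password generation and reject any session whose stamp is stale:

```python
# Each session records the password "generation" current when it was
# created; bumping the generation on password change invalidates every
# outstanding session, mobile included, on its next request.
users = {"alice": {"pw_generation": 1}}
sessions = {"token-abc": {"user": "alice", "pw_generation": 1}}

def session_valid(token: str) -> bool:
    s = sessions.get(token)
    return s is not None and s["pw_generation"] == users[s["user"]]["pw_generation"]

def change_password(user: str) -> None:
    users[user]["pw_generation"] += 1  # all old sessions now fail the check
```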
Possibly not, because the token is app-specific and it's not likely that the Slack key was compromised, but I also agree that they should have disclosed that.
> Slack’s hashing function is bcrypt with a randomly generated salt per-password which makes it computationally infeasible that your password could be recreated from the hashed form.
I'm happy to hear they didn't just use MD5 with no salt, as this would be the same as storing it in plain text...
bcrypt + random salt sounds to me like the best practice nowadays; is it still holding? Or are there advances in GPU cluster costs on EC2 that make even bcrypt hackable? I think I heard it has a way to "adapt" to advances in computing. Is that by simply adding more iterations based on the current CPU speed or something? How does that work?
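To the adaptivity question: bcrypt takes a "cost" parameter that is a log2 work factor, so each increment doubles the hashing work, and as hardware gets faster you store new hashes with a higher cost. A rough illustration of the doubling, with stdlib PBKDF2 standing in for bcrypt:

```python
import hashlib
import os

def hash_with_cost(password: bytes, salt: bytes, cost: int) -> bytes:
    # bcrypt-style cost: iterations = 2 ** cost, so cost+1 means 2x work.
    # PBKDF2 is only a stand-in here; bcrypt's internals differ.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 2 ** cost)

salt = os.urandom(16)
h10 = hash_with_cost(b"hunter2", salt, 10)   # 1,024 iterations
h12 = hash_with_cost(b"hunter2", salt, 12)   # 4,096 iterations, ~4x slower
```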
Was an admin account compromised in a situation where 2FA could have prevented the unauthorized access? If that's not what happened, then 2FA seems a bit hand wavy if it's not directly related to this security incident.
I wonder how many people send sensitive credentials or other operational details through Slack. It'd definitely be a target (along with mail systems) if you want to attack better-protected customer systems.
We've been waiting for over a year for Slack to create a self-hosted version that we can deploy to our intranet specifically because we can't expose ourself to things like this. They've kept insisting that it's around the corner but it doesn't seem to be happening. Hopefully this will spur them to prioritize self-hosted Slack.
Literally was arguing with someone like two days ago that using Slack for sensitive data was a bad idea, guaranteed to blow up in your face sooner or later.
We haven't seen anything blow up yet other than Slack itself. While it sucks that the user table was compromised, I think the actual effect on businesses is more benign than they'd like to believe (i.e. sensitive chat logs that could be used for blackmail).
Given the use of bcrypt for password salting/hashing, I'd say any attacks from here may be targeted at specific users, or at those with really weak passwords; running a top-10k password list wouldn't take too long on a distributed cluster per user. How much that opens up, and how that corresponds or overlaps with Slack's password requirements, will vary.
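A back-of-envelope sketch of that targeted attack, with made-up but plausible figures (bcrypt timing depends heavily on its cost setting):

```python
# Assume ~0.1 s per bcrypt evaluation on one core (hypothetical; depends
# on the cost factor Slack used) and a top-10k candidate list per user.
seconds_per_guess = 0.1
guesses_per_user = 10_000
cores = 100  # a modest distributed cluster

seconds_per_user = guesses_per_user * seconds_per_guess / cores
print(round(seconds_per_user, 1))  # 10.0 -- weak passwords fall in seconds
```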
Last year someone found a Slack security hole and was able to see all the companies using it and the room names, some of them pertaining to products under development.
People are suckers for still using this company's products.
Can we go back to IRC now, please! Slack is not only distracting and proprietary, but also pretty expensive. Let the mere mortals use it, but we should stay away!
http://stackoverflow.com/questions/16891729/best-practices-s...
Otherwise such advice should be ignored.
This wouldn't be my first concern. It would be all of the confidential communication that happens within Slack.
* Use database activity monitoring (http://en.wikipedia.org/wiki/Database_activity_monitoring). If you don't list users on your site and you get a query that would return more than one user record, it's a hacker
* Add some honeytokens (http://en.wikipedia.org/wiki/Honeytoken) to your user table, and sound the alarm if they leave your db
* Use Row-Level Security
* Database server runs on own box in own network zone
* Send logs via write-only account to machine in different network zone. Monitor logs automatically, and have alerts.
* Pepper your passwords (HMAC them with a key kept in an HSM on the web server, then bcrypt; don't store the key in the db). https://blog.mozilla.org/webdev/2012/06/08/lets-talk-about-p...
* Use a WAF that looks for SQL injections
* [Use real database authentication, per user. Not one username for everyone connecting to db. Yes, this is bad for connection pooling]
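The honeytoken bullet above can be sketched in a few lines (the addresses and field name are invented):

```python
# Seed the user table with fake accounts that no legitimate query should
# ever return; if one shows up in query results or an outbound dump,
# sound the alarm.
HONEYTOKENS = {"trap-3f9a@example.invalid", "trap-77c2@example.invalid"}

def rows_trip_alarm(rows: list[dict]) -> bool:
    """Return True (alarm) if any honeytoken is about to leave the db."""
    return any(row.get("email") in HONEYTOKENS for row in rows)
```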
There is no reason why you can't take 10 minutes to set up IRC with SSL on your own.
Yes, Slack is awesome, lots of features, but it's not yours!
Nothing sweeter than "I told you so".
> Download and install either the Google Authenticator or Duo Mobile apps on your phone or tablet.
Hey Slack, I don't have a smartphone. What am I supposed to do?