"Applications should enforce password complexity rules to discourage easy to guess passwords." - ARGH!
To clarify, to avoid downvotes for a non-'productive' comment: I firmly disagree, since this will probably result in me having to pick a password that's harder to remember than I otherwise would. It might also make it more awkward to type quickly, making shoulder-surfing easier.
(Note that this is probably not i18n-friendly, either)
At my company this typically results in users printing or writing the password down.
I wonder if the guys who create these heuristics/recommendations ever had contact with humans. I believe that their research is thorough in their area of expertise (info sec), however, it sounds like they are only considering data per se, ignoring human behavior variables. There's little value in enforcing hard-to-break passwords while also encouraging users to write them down.
What I believe: Info sec researchers should team up with HCI people.
No link handy, but I'm sure someone recently wrote an open-source effective-entropy calculator, which avoids enforcing stupid entropy-reducing, complexity-increasing rule sets while still preventing anyone from using stupidly easy-to-guess passwords. Others do things like check against a normalised dictionary (i.e., I, 1 or L are all considered the same character). I really don't know why, in this day and age of complex software with libraries for even trivial functionality, this practice isn't more widespread. People are still writing dumb complexity rules like it's 1980. This should really be the new "don't do your own crypto".
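That calculator is probably zxcvbn, Dropbox's open-source strength estimator. A toy sketch of the normalised-dictionary idea such checkers use (the substitution table and word list below are made-up stand-ins, not zxcvbn's data):

```python
import unicodedata

# Made-up substitution table: confusable characters collapse to one canonical
# form before the dictionary lookup (i.e. I, 1 or l are the same character).
CONFUSABLES = str.maketrans({"1": "l", "!": "l", "|": "l", "0": "o",
                             "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"})

COMMON_WORDS = {"password", "letmein", "qwerty", "iloveyou"}  # stand-in list

def normalise(password: str) -> str:
    # Lowercase, strip accents, then collapse the l33t substitutions.
    decomposed = unicodedata.normalize("NFKD", password.lower())
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return stripped.translate(CONFUSABLES)

def too_guessable(password: str) -> bool:
    return normalise(password) in COMMON_WORDS

print(too_guessable("P@$$w0rd"))  # True: normalises to "password"
```

The point is that "P@$$w0rd" should score as badly as "password", which naive complexity rules would happily accept.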
Having recently had to implement improved password security for a customer who wouldn't read XKCD #936, the internationalization thing was a pain in the ass.
They wanted at least 10 characters, and at least one uppercase, one lowercase, one digit, and one special character. Easy enough with .NET's built-in membership stuff by setting:
passwordStrengthRegularExpression="(?=.{10,})(?=(.*\p{Lu}){1,})(?=(.*\d){1,})(?=(.*\W){1,})"
The "\p{Lu}" part handles uppercase characters even in Unicode chars, but Javascript has no equivalent, so I couldn't do client-side validation of that. Should be validating on both ends anyway, but it's still a pain.
The real part I hated was having to keep track of users' last N passwords to make sure they didn't re-use them. Since everything's hashed and salted, I just kept a table of previous hashes by user. Seems simple, but MS didn't see fit to include a HashPassword(string plainTextPassword, byte[] userSalt) method in the membership provider, so I had to reverse engineer their password-hashing method to check when they change passwords if it's something that's been used before.
Then I realized that they could just change their password N+1 times in about a minute, then re-use their expired password anyway, so we wound up having to set a minimum age of N weeks before a password could be reused as well.
The whole problem is an exploding requirements nightmare that could easily be solved by saying "Must be >32 characters and don't write it down anywhere, idiot."
The worst part is as much as I hate these types of requirements, I now perfectly understand why these systems are the way that they are.
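The history check described above boils down to re-hashing the candidate with the user's stored salt and comparing against the saved hashes. A rough sketch, with PBKDF2 standing in for the membership provider's internal hash (not the actual .NET implementation):

```python
import hashlib, hmac, os

def hash_password(plaintext: str, salt: bytes) -> bytes:
    # PBKDF2 stands in for whatever the membership provider does internally.
    return hashlib.pbkdf2_hmac("sha256", plaintext.encode(), salt, 100_000)

def was_used_before(candidate: str, salt: bytes, previous_hashes: list) -> bool:
    candidate_hash = hash_password(candidate, salt)
    # compare_digest avoids leaking timing information
    return any(hmac.compare_digest(candidate_hash, h) for h in previous_hashes)

salt = os.urandom(16)
history = [hash_password(p, salt) for p in ("Hunter2!Hunter2!", "Tr0ub4dor&3xyz")]
print(was_used_before("Hunter2!Hunter2!", salt, history))      # True
print(was_used_before("fresh-new-passphrase", salt, history))  # False
```

Note this only works if the salt stays constant across password changes; if each change gets a fresh salt, you have to store (salt, hash) pairs and re-hash the candidate once per history entry.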
I think context is important. If it's a bank password, I don't mind coming up with something complex. If it's for something that requires me to have an account, but I don't care at all whether or not someone else gains access to the account, I have no problem using a much simpler password and would be rather annoyed if I had to do something more complex.
I agree on that as well. Enforced - NO (only minimum length should be enforced imho). Encouraged - Yes. Just show a good password strength meter to the user and fine-tune it to your security requirements.
If you enforce all kinds of weird password rules on the user, he will have to write the password down somewhere, because one couldn't possibly remember all those passwords. And for non-technical users that means some random pieces of paper or post-it notes. On the other hand, encouraging them to come up with something that is strong makes it more likely that they will invent a password that they can remember, thus making it more secure.
A personal pet peeve of mine is websites that happily inform me my 20-character long, randomly generated by a secure algorithm with an extra heaping of entropy, password is not "strong enough."
I just love generating 3 or 4 different COMPLETELY RANDOM passwords with KeePass because your stupid password rules were written by people who wouldn't know entropy if a dictionary open to 'en' hit them square in the jaw.
I see your point, but given the number of 'test', 'password' and '12345' passwords we see whenever there's a leak, that could be an issue.
Maybe just a minimum length? I too get annoyed when there are specific complexity requirements, like 'must include an uppercase letter' even though I've used a 20-character long password including numbers and punctuation.
While in some ways I agree that these password rules enforcements are not ideal (they are hardly scientific, they seem much more "common sense" to me), I think making you use a harder-to-remember password is part of the point. The current received wisdom on passwords is that if it's easy to remember, it's easy to guess.
That said, I'm not crazy about the inherent paternalism of this sort of thing. I think allowing weak passwords with a warning is, in most instances (not including corporate IT and situations where you're requiring that the user protect your private information rather than their own), a preferred way to go. Informing people when they are making a weak password should at the very least let them make a choice about how much they care about their own security.
At Lavaboom, we simply check against the 10k most used passwords (in memory), but we plan to move to 1 million soon (on disk – account creation is relatively infrequent for the slower access speed to not matter).
Our problem is that we SHA the passwords on the client side, so each password is 256 bits long. The resulting hashtable (or bloom filter) is still a reasonable size for disk storage, though.
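A minimal sketch of that kind of blocklist check, with a three-entry stand-in for the 10k list:

```python
import hashlib

# Three-entry stand-in for the 10k-most-used-passwords list.
common = ["123456", "password", "qwerty"]

# The client sends SHA-256(password), so the blocklist must hold
# hashes of the common passwords rather than the passwords themselves.
blocked = {hashlib.sha256(p.encode()).hexdigest() for p in common}

def is_blocked(client_hash: str) -> bool:
    return client_hash in blocked

print(is_blocked(hashlib.sha256(b"password").hexdigest()))  # True
```

For a million entries, a Bloom filter gives the same membership test in far less space, at the cost of a small false-positive rate, which here only means occasionally rejecting a perfectly fine password.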
It depends on your threat model of course, but in most situations an attacker that is physically near you is less concerning than an attacker that can be anywhere in the world.
In most cases I'd rather you had a twelve character random string written on a post it attached to your monitor than the password "password" not written down anywhere.
It is more palatable if the rules provide for phrases, say 20 characters or longer. Then you can do a few substitutions for word separators and the like.
If you answer to any kind of external or security compliance regime, that compliance is usually built around NIST guidance. They are big about strong passwords and MFA.
Exactly. If my password is 20 characters, why is it more secure to have it be 10 characters using upper, numbers, and symbols? (i.e. 26^20 >> 95^10)
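The arithmetic behind that comparison, where the entropy of a random password is length times log2 of the alphabet size:

```python
from math import log2

# Entropy of a random password = length * log2(alphabet size)
lowercase_20 = 20 * log2(26)   # 20 random lowercase letters
printable_10 = 10 * log2(95)   # 10 random printable-ASCII characters
print(round(lowercase_20, 1), round(printable_10, 1))  # 94.0 65.7
```

So the long all-lowercase password carries roughly 28 more bits than the short "complex" one.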
"An application should respond with a generic error message regardless of whether the user ID or password was incorrect."
I really don't like this advice (although I see why they put it in there).
I often use different email addresses for different services so that I can determine who sells on email addresses (depending on how much I trust them), and quite often I can't remember which email address I signed up with (was that [email protected] or [email protected]).
At least if I see "user not recognised", I know to try a different email address.
There was a decent article (I think it was on HN) a while ago that argued against this type of generic error message. The basic idea is that you can very easily discover whether the email is valid or not by attempting to create an account with that email (in most cases). It's trivially easy to either verify that the email you are trying to use is valid, or even build a database of valid email addresses to crack, by attempting to create accounts. So why bother with generic error messages at all? It's not really buying you anything on the security end, and it seems like it's sacrificing some usability.
Agreed. I actually just fought and won this battle at work. If you don't want to expose the specific error when logging in, you must also not leak usernames/emails through the signup process. Otherwise, it's just security theater.
So just keep trying addresses until you get a reset link sent to you. It's really unacceptable for any service to leak its user list in the way you suggest.
EDIT: x1j7xJuzX in the sibling subthread has it right for email addresses. It's true that separate usernames would be difficult to handle in a user-friendly manner without leaking, but with a valid email address a separate username is probably unnecessary. It doesn't help the user interact with the site. If users interact with each other, they can just choose non-unique display names. To prevent impersonation, just use display name plus some other invariant account property to generate a hash that is displayed alongside the display name.
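A sketch of that display-name discriminator; the choice of SHA-256, the six-character truncation, and the account-creation date as the invariant property are all assumptions for illustration:

```python
import hashlib

def display_tag(display_name: str, account_created: str) -> str:
    # account_created stands in for any invariant account property;
    # the 6-hex-digit truncation is an arbitrary choice.
    digest = hashlib.sha256(f"{display_name}:{account_created}".encode()).hexdigest()
    return f"{display_name}#{digest[:6]}"

print(display_tag("alice", "2014-05-01"))  # alice#<6 hex digits>
```

Two users who both pick "alice" as a display name then render as alice#xxxxxx with different suffixes, which frustrates casual impersonation without requiring unique usernames.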
"Maximum password length should not be set too low, as it will prevent users from creating passphrases. Typical maximum length is 128 characters."
Why would you ever have a maximum password length at all? bcrypt or (god forbid) your secure hashing algorithm of choice doesn't care about input length, and has a fixed output length to stick in a database. Why on earth would you limit the password length beyond anything so insanely large (1024, etc) to not even matter?
To a user it may not matter (they won't know what is being truncated) but from a systems design POV you should limit the unnecessary. Why let users POST 1MB text strings to your server if you're just going to discard them?
Such a low maximum length does not make a lot of sense, but say a limit of 1024/2048 seems reasonable. The amount of time it takes to compute a hash is proportional to the length of the input and you do not want to facilitate a DOS attack.
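The guard is a one-liner; the 1024-byte cap below is the arbitrary generous limit suggested above:

```python
MAX_PASSWORD_BYTES = 1024  # arbitrary, but far beyond any real passphrase

def validate_length(password: str) -> str:
    # Reject oversized input before it reaches the (deliberately slow) hash.
    if len(password.encode("utf-8")) > MAX_PASSWORD_BYTES:
        raise ValueError("password too long")
    return password

print(len(validate_length("a" * 100)))  # 100
```

Rejecting (rather than silently truncating) avoids the surprise of a password that "works" only up to its first N characters.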
Every time you introduce a password constraint, you've reduced the potential password complexity. I absolutely hate arbitrary password requirements. "not more than 2 identical characters in a row"? WTF? Stop with this nonsense.
This is OT, but there's an interesting snippet in "The Secret Life of Bletchley Park" [1] about decoding Enigma messages used by the Italian Navy in the Med.
One of the female operators had a set of messages from one Italian operator who sent a message once a week on a regular basis. They had determined that the first letter was an 'L'. She looked at the keyboard, saw that 'L' was neatly placed under the right hand and guessed that he was sending a test message consisting of nothing but 'L's tapped out in quick succession. Voila! She hit the jackpot.
From this insight, all dial wirings and movements of the Italian machines could be quickly deduced.
So, repetitive plain text can be a security issue.
[1] http://www.amazon.com/The-Secret-Life-Bletchley-Park/dp/1845...
A big one I've seen is more related to the TLS cheat sheet [1] they link to on that page.
Many sites will send session tokens over http because they don't set the "secure" cookie flag. It's a simple thing to do, and prevents a malicious ARP poison or DNS attack from potentially hijacking an account.
You'd be surprised how many sites are vulnerable to such attacks. Reddit, parts of Ebay, several university websites, and many other sites still are vulnerable to session hijacking.
I think people writing web libraries need to start building "sane defaults" concerning security. All cookies should be secure by default, and only those who know what they are doing should turn that off. It's not that much extra overhead, and the potential benefits outweigh the increased processing and bandwidth.
[1] https://www.owasp.org/index.php/Transport_Layer_Protection_C...
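With Python's stdlib cookie machinery, the flags in question look like this (stdlib shown for illustration; the point is that a framework should set them by default):

```python
from http.cookies import SimpleCookie

def session_cookie(token: str) -> str:
    c = SimpleCookie()
    c["session"] = token
    c["session"]["secure"] = True    # never sent over plain HTTP
    c["session"]["httponly"] = True  # hidden from document.cookie
    c["session"]["path"] = "/"
    return c["session"].OutputString()

print(session_cookie("abc123"))  # session=abc123; HttpOnly; Path=/; Secure
```

With Secure set, the session token is simply never transmitted on an http:// request, so the ARP/DNS attacks described above have nothing to capture.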
Great point. Setting "Secure" and the poorly named "HttpOnly" are key to cookie security.
One issue we ran into: our site runs behind a load balancer. We receive HTTPS connections at the load balancer, but the internal connection between the load balancer and the actual websites was HTTP only. When we tried to set Secure on the cookies, the application framework we were using tried to be "helpful" and unset the Secure flag, because it detected that the connection, from its perspective, was not secure (even though from the browser's perspective it was).
Keep in mind that the connection between load balancer and web-servers was never on the internet, in fact it never left a virtual machine farm (a single room essentially). So it is justifiable doing HTTP internally and HTTPS externally (and also makes certificate management easier).
We finally had to hack away a bit on the framework to get it to set secure regardless of the connection type.
> Many sites will send session tokens over http because they don't set the "secure" cookie flag. It's a simple thing to do, and prevents a malicious ARP poison or DNS attack from potentially hijacking an account.
Or you can of course enforce HSTS so that HTTP never gets used.
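The header itself, per RFC 6797 (the one-year max-age is a common choice, not a requirement):

```python
def hsts_header() -> dict:
    # One year max-age; includeSubDomains so no subdomain ever falls back to HTTP.
    return {"Strict-Transport-Security": "max-age=31536000; includeSubDomains"}

print(hsts_header())
```

Once a browser has seen this header over HTTPS, it rewrites all future http:// requests for the domain to https:// before they ever hit the network.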
Some of the suggestions are bad. Why are they enforcing English characters, like a-z? For example, on GitHub I type щ and then it wants me to write a lowercase letter. WTF? It is lowercase! And more secure than an English letter!
Do people REALLY brute force passwords? Do people REALLY brute force all lowercase, all Latin combinations up to 20 characters before trying symbols, uppercase and numbers?
I am very skeptical that the '3/4 complexity rules' approach is making systems meaningfully more secure. I've had all kinds of passwords, but I've never lost them to brute force. Every time it was because someone got inside a company and made off with the database.
If complexity rules don't add anything, they should be discarded in the name of usability.
Just a hypothetical, but what if an application started encouraging users to enter a "login sentence" instead of a password. i.e.: "Please enter a sentence that you'll be asked to remember each time you login." Obviously, the standard constraints of length and complexity (albeit slightly altered) can be enforced.
It's much easier for me to remember "Please close the window, I'm cold." than it is for me to remember "XSDJd94*(lo03X.._".
The application may return a different HTTP Error code depending on the authentication attempt response. It may respond with a 200 for a positive result and a *403* for a negative result.
I would say a 401 - Unauthorized with proper WWW-Authenticate header.
403 means Forbidden, which applies when you try to access a resource without permission/authorization.
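A sketch of the failure path; RFC 7235 requires a 401 response to carry a WWW-Authenticate challenge (the Bearer scheme and realm value here are placeholders):

```python
def login_failed():
    # RFC 7235: a 401 response must include a WWW-Authenticate challenge.
    # The Bearer scheme and realm value are placeholders.
    return 401, {"WWW-Authenticate": 'Bearer realm="example"'}, "Authentication failed"

status, headers, body = login_failed()
print(status, headers)
```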
Select:
- PBKDF2 [4] when FIPS certification or enterprise support on many platforms is required;
- scrypt [5] where resisting any/all hardware-accelerated attacks is necessary but support isn't;
- bcrypt where PBKDF2 or scrypt support is not available.
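PBKDF2 is the one of the three that ships in most standard libraries. A minimal sketch (the iteration count is a placeholder; tune it to your hardware):

```python
import hashlib, hmac, os

def make_hash(password: str, iterations: int = 600_000):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify(password: str, salt: bytes, iterations: int, expected: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, n, stored = make_hash("correct horse battery staple")
print(verify("correct horse battery staple", salt, n, stored))  # True
print(verify("Tr0ub4dor&3", salt, n, stored))                   # False
```

Storing the salt and iteration count alongside the digest lets you raise the work factor later without breaking existing hashes.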
Simplicity should be a primary goal in the methods used to protect systems. Just because the methods to protect are easy doesn't mean it's easy to crack. For instance, a decent-size password and lockout and you're set as far as brute force attacks: they are not going to guess a 10-letter password in 5 tries. After x tries, make them reset. Two-factor auth for the really important stuff. Isn't that pretty much it?
I believe we're seeing more successful attacks from the use of security techniques that are unnecessarily complex and not completely understood (or partially implemented) by most engineers than because passwords aren't long enough.
Password complexity rules are stupid. The only thing that matters is the total entropy. "Entropy too low" is the only error a user should receive when coming up with a password.
Those complexity rules are the result of an entire industry blindly following the best practices of an old unix DES crypt function. It's dumb and it should stop.
He forgot an important modern rule on authentication: don't do it.
If you can get another system to do it for you (Persona, OpenID, GitHub, Google, Facebook, or Twitter), it's more secure for the end user. They have features such as two-factor authentication and fraud detection, they manage password resets for you, and the end user is more likely to already have an account.
Many developers don't agree with this on a moral level, as you are giving power to a third party. However, developers are developers, and if you do it yourself you're bound to do at least one thing wrong.
A problem here where I work is that every application must have a different password and it must change every 90 days. Consequently everyone has a spreadsheet with his passwords written down because nobody could possibly remember them all.
It seems to me that with 2FA, one simple password is adequate. Two independent devices need to be compromised and brute force is ineffective since the turn around time is at least several seconds between tries.
This doesn't touch on commercial authentication managers and how horribly they can be implemented. There's no authorization cheat sheet either.
They also make assumptions like "When multi-factor is implemented and active, account lockout may no longer be necessary." Sure, until someone finds a simple hole in one of the factors and the rest become trivially brute-forced, sniffed, phished, etc. The chain is only as strong as the weakest link.
I don't really like this page. It's a good effort, but:
- Most things are just a flyby, such as a paragraph that tells you what MFA is but doesn't tell you how to use it.
- The password rules are outdated ("use caps, 10 chars, numbers, etc!"). The XKCD "horse staple" approach has been the new standard for years now and is way better. Generating the password for the user is often not a bad idea.
- There's no mention of upcoming techs like FIDO/U2F.
This is not an accusing comment, but more of a request for more information:
"Passphrases shorter than 20 characters are usually considered weak if they only consist of lower case Latin characters."
This goes against the concept of diceware generated passwords of 4-6 short words doesn't it? Where in this equation am I getting it wrong? I've been approaching passwords like this for a while now.
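The arithmetic, assuming the words are chosen truly at random from the standard 7776-word diceware list:

```python
from math import log2

WORDLIST = 7776  # the standard diceware list has 6^5 = 7776 words
for n in (4, 5, 6):
    print(n, "words:", round(n * log2(WORDLIST), 1), "bits")  # 51.7, 64.6, 77.5
print("20 random lowercase:", round(20 * log2(26), 1), "bits")  # 94.0
```

So a 4-6 word random diceware phrase lands in the 52-78 bit range, below 20 random lowercase characters at roughly 94 bits, but far above typical human-chosen passwords; the cheat sheet's 20-character warning is presumably aimed at human-chosen phrases, whose effective entropy is much lower than these random-choice figures.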
stouset | 11 years ago
If a password is easy to remember, it is easy to guess, and if you reuse a password its likelihood of being compromised increases dramatically.
There is no simple solution for this problem. Password managers make the best of a crappy and likely unavoidable situation.
wglb | 11 years ago
And yes, I agree that it is seriously annoying.
Rican7 | 11 years ago
Example: http://3v4l.org/4lGu3
bascule | 11 years ago
https://www.usenix.org/legacy/events/usenix99/provos/provos_...
elithrar | 11 years ago
bcrypt itself accepts a maximum key size of 56/72 bytes (depending on stage) as per http://en.wikipedia.org/wiki/Bcrypt#User_input
pc86 | 11 years ago
Why? If my password is id8FK38f@&&#d, is it inherently less secure if 111 appears in the middle of it somewhere?
cddotdotslash | 11 years ago
The "horse battery staple" XKCD comes to mind.
Karunamon | 11 years ago
ARGH. This is a usability nightmare, more so when the recovery system implements the same rule.
"Okay, I had an account on this website, which email address was it again?"
try logging in a few times
"Hm.. I must have forgotten the password. Off to reset!"
go through the recovery process
recovery page indicates an email will be sent
email never comes
"Wait, so are they being 'really secure', or is email just broken right now?"
wait a couple hours
forget about the site
bohinjc | 11 years ago
Also, in their Password Storage Cheat Sheet [https://www.owasp.org/index.php/Password_Storage_Cheat_Sheet], they seem to recommend the PBKDF2/scrypt/bcrypt selection quoted above. AFAIK, things are not so binary:
* https://news.ycombinator.com/item?id=3724560
* http://security.stackexchange.com/questions/4781/do-any-secu...
* http://security.stackexchange.com/questions/26245/is-bcrypt-...
snarfy | 11 years ago
http://security.stackexchange.com/questions/33470/what-techn...