This is pretty incredible. These aren't just good practices; they're fairly bleeding-edge best practices.
1. No more SMS and TOTP. FIDO2 tokens only.
2. No more unencrypted network traffic - including DNS, which is such a recent development and they're mandating it. Incredible.
3. Context aware authorization. So not just "can this user access this?" but attestation about device state! That's extremely cutting edge - almost no one does that today.
My hope is that this makes things more accessible. We do all of this today at my company, except where we can't - for example, a lot of our vendors don't offer FIDO2 2FA or WebAuthn, so we're stuck with TOTP.
I think 3. is very harmful for actual, real-world use of Free Software. If only specific builds of software on a vendor-sanctioned allowlist (governed by the signature of a "trusted" party that grants them entry to said list) can meaningfully access networked services, then everyone who compiles their own artifacts (even from completely identical source code) will be excluded from accessing that remote site/service.
Banks and media corporations are doing it today by requiring a vendor-sanctioned Android build/firmware image, attested and allowlisted by Google's SafetyNet (https://developers.google.com/android/reference/com/google/a...), and it will only get worse from here.
Remote attestation really is killing practical software freedom.
Also, “Password policies must not require use of special characters or regular rotation.”
They even call out the fact that it's a proven bad practice that leads to weaker passwords - and such policies must be gone from government systems in 1 year from publication of the memo. It's delightful.
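The replacement guidance (minimum length plus a breached-password screen, in line with NIST SP 800-63B) is simple enough to sketch. A minimal illustration, where the tiny blocklist is a stand-in for a real breach corpus such as the Pwned Passwords dataset:

```python
# NIST SP 800-63B-style password check: no composition rules, no forced
# rotation -- just a length minimum and a screen against known-breached
# passwords. COMMON_PASSWORDS is an illustrative stand-in for a real corpus.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def password_acceptable(pw: str) -> bool:
    if len(pw) < 8:                      # minimum length per 800-63B
        return False
    if pw.lower() in COMMON_PASSWORDS:   # breached-password screen
        return False
    return True                          # note: no special-character rule

print(password_acceptable("correct horse battery staple"))  # True
print(password_acceptable("password"))                      # False
```

Note that a long all-lowercase passphrase passes while a short "complex" password fails, which is exactly the inversion the memo is asking for.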
By the time we implement any of these things, if ever, they certainly won't be. I work on military networks and applications, and it's hard for me to believe that I'll see any of this within my career at the pace we move. This is the land of web applications that only work with Internet Explorer, ActiveX, Silverlight, Flash, and Java Applets, plus servers running Linux 2.6 or Windows Server 2012.
The idea of "Just-in-Time" access control where "a user is granted access to a resource only while she needs it, and that access is revoked when she is done" is terrifying when it takes weeks or months to get action on support tickets that I submit (where the action is simple, and I tee it up with a detailed description of whatever I need done).
We've been building toward these goals at BastionZero, so I've been living it every day, but it feels validating and also really strange to see the federal government actually get it.
Force banks to do this, immediately. It can be levied on any organization that has a banking license or wants access to Fedwire or the ACH system. Force it for SWIFT access too, if the bank has an online banking system for users.
These are strong requirements, but I fear the government just wants more visibility into citizens. Remote attestation of trusted platforms could enable the worst surveillance attempts we have ever seen. And it would require you to trust your government, which is a bad idea from a security point of view.
edit: That governments tend to extend surveillance is pretty well documented, I believe. So much so that I think it's worth inserting the problem into debates about anything related to security, because security often serves as the raison d'être for such ambitions.
SMS is bad due to MITM and SIM cloning. In the EU, many banks still use smsTAN, and it leads to lots of security breaches. It's frustrating that some don't offer any alternatives.
However, is FIDO2 better than chipTAN or similar? I like simple airgapped 2FAs, but I'm not an expert.
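For what it's worth, TOTP is itself effectively airgapped after the initial secret exchange: RFC 6238 is just HMAC-SHA1 over a count of 30-second time steps. A stdlib-only sketch, checked against the RFC's published test vectors:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of `step`-second
    intervals since the Unix epoch, dynamically truncated per RFC 4226."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                              # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: secret "12345678901234567890",
# T = 59 seconds, 8 digits, SHA-1 -> "94287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

The phishability concern is orthogonal: the code is generated offline, but nothing binds it to the site it's typed into, which is what FIDO2/WebAuthn's origin binding fixes.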
Given that every cybersecurity czar seems to publicly resign a few months after being appointed, what are the chances of these actually being implemented?
The real crux of the issue is the long-tail of applications which were never conceived with anything but network-based trust. I'm certain the DoD is absolutely packed with these, probably for nearly every workflow.
The reason this was so "easy" for Google (and some other companies, like GitLab[2]) to realize most of these goals is that they are web-based technology companies: fundamentally, the tooling and scalable systems needed to get started were web, so the transition was "free". Meaning, most of the internal apps were HTTP apps built on internal systems, and the initial investment was just to take an existing proxied internal service, make it external, and put it behind a context-aware proxy[1].
The hard part for most other companies (and the DoD) is figuring out what to do with protocols and workflows that aren't HTTP or otherwise proxyable.
[1] https://cloud.google.com/iap/docs/cloud-iap-context-aware-ac...
[2] https://about.gitlab.com/blog/2019/10/02/zero-trust-at-gitla...
Many workflows are proxyable using fine-grained IP-level or TCP-level security (I believe Tailscale does more or less this). This can't support RBAC or per-user dynamic authentication particularly well, but it can at least avoid trusting an entire network.
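A minimal sketch of the kind of policy such an overlay can express (hostnames and users here are illustrative). It also shows the limitation: the policy tuple only sees network identifiers, with no room for application-level roles or request context.

```python
# Illustrative L3/L4 policy: (user, dest_host, dest_port) -> allow/deny.
# Roughly what a WireGuard/Tailscale-style overlay ACL can enforce.
ACL = {
    ("alice", "db.internal", 5432),   # alice may reach Postgres
    ("alice", "git.internal", 22),
    ("bob",   "git.internal", 22),    # bob gets git, but not the DB
}

def allowed(user: str, host: str, port: int) -> bool:
    # Pure network-level decision: nothing here knows *what* the user
    # does once the TCP connection is allowed -- that's the RBAC gap.
    return (user, host, port) in ACL

print(allowed("alice", "db.internal", 5432))  # True
print(allowed("bob", "db.internal", 5432))    # False
```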
Our corporate IT folks have a Zero Trust Manifesto in which there are enrolled devices (laptops that remote SREs can carry around), there are enterprise applications, and there's connectivity between these (e.g., tunnel pairs). SREs often need to write scripts that operate on sensitive production data from the enterprise applications, but must do this work directly on an enrolled device.
Pre-Zero-Trust days seemed safer. Copying production data to a laptop wasn't allowed. Instead, each SRE had their own Linux VM in the data center, accessible from home and able to run the scripts (with connectivity to the enterprise application). This prevented a whole class of realistic attacks in which a laptop (while unlocked/decrypted) is taken by an adversary. Admittedly, in return, we're protected from a possible, but less likely, attack in which a Linux VM is compromised and used for lateral movement within one segment of the enterprise network. (An enrolled device has to be in the user's possession; it can't be any machine, Linux or Windows, in the data center or office.)
The only people who love this are our enterprise application vendors. Our bosses are paying them a TON more money to implement new requirements where, in theory, all possible types of data analysis can be done directly within the enterprise application. No more scripts, no more copying of data. No more use of Open Source. And, of course, people from these same enterprise application vendors advise the government that Zero Trust must be a top priority mandate.
Really pleasantly surprised at how progressive this memo is. It will be interesting to see the timelines put in place to make the transition.
Btw - I'd love to see the people who put this memo together re-evaluate the ID.me system they're implementing for citizens given how poor the identity verification is.
TOTP is not going anywhere for much of the Internet. Hold on while I get a Yubikey to my dad, who thinks "folders can't be in other folders" because that's not how they work in real life.
TOTP is a great security enhancement, and while phishable, considerably raises the bar for an attacker.
The fact that TOTP is mentioned as a bad practice in this document is an indicator that this should not be considered a general best practices guide. It is a valid best practice guide for a particular use case and particular user base.
> Today’s email protocols use the STARTTLS protocol for encryption; it is laughably easy to do a protocol downgrade attack that turns off the encryption.
This can be solved with DANE, which is based on DNSSEC. When properly configured, the sending mailserver will force the use of STARTTLS with a trusted certificate. The STARTTLS+DANE combination has been a mandatory standard for governmental organizations in the Netherlands since 2016.
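The downgrade is worth seeing concretely: an on-path attacker simply deletes the STARTTLS capability line from the EHLO response, and a client doing opportunistic encryption carries on in plaintext. A toy illustration of the mechanism (DANE defeats this because the signed TLSA record tells the sender, out of band, that TLS is required and which certificate to expect):

```python
# Why opportunistic STARTTLS is trivially downgradeable: the capability
# is advertised in cleartext, so an active MITM can just remove it.
EHLO_RESPONSE = (
    "250-mail.example.org\r\n"
    "250-SIZE 52428800\r\n"
    "250-STARTTLS\r\n"
    "250 HELP\r\n"
)

def mitm_strip_starttls(response: str) -> str:
    """What the on-path attacker does: drop the STARTTLS line."""
    return "".join(line + "\r\n" for line in response.splitlines()
                   if "STARTTLS" not in line.upper())

def client_will_encrypt(response: str) -> bool:
    """Opportunistic client: encrypt only if the server advertises it."""
    return "STARTTLS" in response.upper()

print(client_will_encrypt(EHLO_RESPONSE))                       # True
print(client_will_encrypt(mitm_strip_starttls(EHLO_RESPONSE)))  # False
```

With DANE (or MTA-STS), the sender's decision to encrypt no longer depends on this strippable in-band advertisement.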
I’m somewhat unhappy the “zero trust” terminology has caught on. The technology is fine, but trust is an essential concept in many parts of life[0], and positioning it as something to be avoided or abolished will just further erode the relationships that define a peaceful and civil society.
0: trade only works if the sum of your trust in the legal system, intermediates, and counterparts reaches some threshold. The same is true of any interaction where the payoff is not immediate and assured, from taxes to marriage and friendship, and, no, it is not possible to eliminate it, nor would that be a society you’d want to live in. The only systems that do not rely on some trust that the other person isn’t going to kill them are maximum-security prisons and the US president’s security bubble. Both are asymmetric and still require trust in some people, just not all.
> “discontinue support for protocols that register phone numbers for SMS or voice calls, supply one-time codes, or receive push notifications.”
... necessarily means TOTP.
Could be argued that "supply" means code-over-the-wire, so all three are things with a threat of MITM or interception: SMS, calls, "supply" of codes, or push. Taken that way, all of them fail the "something I have" check. So arguably "supply one-time codes" rules out both what HSBC does and what Apple does, pushing a one-time code displayed together with a map to a different device (but sometimes the same device).
I'd argue TOTP is more akin to an open, soft hardware token: after initial delivery it works entirely offline, and it passes the "something I have" check.
>Meanwhile encryption with PGP has been a complete failure, due to problems with key distribution and user experience.
Encrypted messaging has been a complete failure; there is no need to single out email. I suspect the reason is more or less the same in all cases: users have not been provided with a conceptual framework that would allow them to use the tools in a reasonable way. If the US federal government can come up with, and promote, such a framework, the world would become a different place.
BTW, the linked article is mostly based on misconceptions:
* https://articles.59.ca/doku.php?id=pgpfan:tpp
I wonder if the recommendation for context-aware auth also includes broader adoption of Impossible Travel style checks?
For context, Impossible Travel is typically defined as an absolute minimum travel time between two points based on the geographical distance between them, with the points themselves being derived from event-associated IPs via geolocation
The idea is that if a pair of events breaches that minimum travel time by some threshold, it's a sign of credential compromise. It's effective for mitigating active session theft, for example, as any out-of-region access would violate the aforementioned minimum travel time between locations and produce a detectable anomaly.
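A minimal sketch of such a check, assuming a maximum plausible speed of roughly airliner cruise (the threshold and the coordinates below are illustrative, and real deployments must also budget for IP-geolocation error):

```python
import math

EARTH_RADIUS_KM = 6371.0
MAX_SPEED_KMH = 900.0  # assumption: roughly airliner cruise speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def impossible_travel(ev1, ev2):
    """Each event is (unix_ts, lat, lon), lat/lon from IP geolocation.
    Flag if covering the distance would require exceeding MAX_SPEED_KMH."""
    (t1, la1, lo1), (t2, la2, lo2) = sorted([ev1, ev2])
    hours = max((t2 - t1) / 3600.0, 1e-9)  # guard against zero interval
    return haversine_km(la1, lo1, la2, lo2) / hours > MAX_SPEED_KMH

# Login from New York, then from Moscow 30 minutes later:
ny = (0, 40.71, -74.01)
moscow = (1800, 55.76, 37.62)
print(impossible_travel(ny, moscow))  # True -> likely session theft
```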
This sounds really beautiful, and I am saving the link for future reference.
I'm curious about the DNS encryption recommendation. My impression was that DNSSEC was kind of frowned upon as providing no real security, at least according to the folks I try to pay attention to. Is this a conflict between differing perspectives, or am I missing something?
> Do not give long-lived credentials to your users.
This screams "we'll use more post-it notes for our passwords than before", or maybe the real world to which this memo is addressed is different from the real (work-related) world I know.
> “Enterprise applications should be able to be used over the public internet.”
Isn’t exposing your internal domains and systems outside VPN-gated access a risk? My understanding is this means internaltool.faang.com should now be publicly accessible.
This is a windfall for Gov't contractors.
Nobody cares. It just gets postponed forever.
Finally! Maybe the places I've worked will finally listen. But I stopped reading TFA to praise this, so back to TFA.
It's simply amazing.