I'm so happy to see this. As Bruce Schneier (who runs an open WiFi network at home) explains, "if my computer isn't secure on a public network, securing my own network isn't going to reduce my risk very much."[1]
The same is true for corporate applications (and devices like printers). If they're not secure on a public network, securing the corporate network won't reduce their risk that much: they're still exposed to potential breaches elsewhere in the corporate network.
There are other, valid reasons not to run a public access point: not wanting neighbors to steal your bandwidth, run a Tor node off it, or host illegal content, for example. Any of these activities could get you dropped by your ISP, or even taken to court. While you could probably prove your innocence in court, I cannot see why taking that risk for absolutely no personal benefit is worthwhile. I don't really see how running an open wifi network shows anything other than ignorance of the risks.
For what it's worth, Bruce Schneier also has a machine air-gapped from the internet. A different scenario from a huge corporate LAN, or even a small network with a wireless access point, to be sure, but there are a number of ways to interpret that quote.
We tried this where I worked (with the exception of the evil desktop financial application)... and had to retract after a zero-day defacement in one of our web apps. In the meantime we also learned that keeping all of your web apps 100% up to date at all times is really freaking difficult. The good news is that the (failed) attempt got us off of a few client-side applications and made us much more platform-agnostic than we were before.
If you have the resources of Google it's a bit different, especially if all of the software is custom and developed internally.
You can't succeed with this model by just setting your firewall to allow 0.0.0.0/0. This approach still requires defense in depth, and a holistic view of security. If someone was able to deface your web app, then your company wasn't actually using all the components that are required to make this model work (such as authenticated devices, device patch management, and user 2-factor authentication).
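To make that concrete, here is a minimal sketch of the kind of policy check this comment describes. The field names and the patch-level threshold are my own illustration, not Google's actual policy: the point is that access requires a known, authenticated, patched device AND a second authentication factor, not mere network reachability.

```python
MIN_PATCH_LEVEL = 42  # assumed threshold, purely for illustration


def access_decision(device: dict, user: dict) -> bool:
    """Grant access only when every required control is satisfied.

    A reachable IP address contributes nothing here: a request from an
    unknown or unpatched device is denied even with a correct password.
    """
    device_ok = (
        device.get("in_inventory", False)    # device identity is known
        and device.get("cert_valid", False)  # authenticated device
        and device.get("patch_level", 0) >= MIN_PATCH_LEVEL  # patch management
    )
    user_ok = (
        user.get("password_ok", False)
        and user.get("second_factor_ok", False)  # user 2-factor auth
    )
    return device_ok and user_ok
```

In this shape of model, "defense in depth" means every conjunct must independently hold; removing the firewall only removes one redundant layer, not the controls themselves.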
The only way to succeed with this is with heavy firewalling or VPNs. There are several unknown zero-days in any application, so opening up your application to 0.0.0.0/0 makes it possible for blackhats to get in. The only question is how much your information is worth to somebody. If it is worth less than the price of a brand-new zero-day you might be OK, but there are still the script kiddies and political blackhat organizations who mass-deface any site with a zero-day vuln (zero-day meaning, in this case: unknown to the operators of the site).
Finally, the zero trust network has its day. I've been following this for quite a while, especially since this kind of architecture makes even more sense for smaller businesses than large ones that can pay for sophisticated network-edge protection.
Yes, I submitted this yesterday (instead of the article submitted here, because I found it to be deeper and did not want to submit 2 links at the same time), but sadly it failed to gain traction.
As some other folks have pointed out (and contrary to what the headline implies), this isn't just setting your firewall to allow 0.0.0.0/0. In particular, pay attention to the Device Identity (client cert) and the Access Proxy parts.
What's left implied but unstated in this post is that a corporate intranet is often, in practice, as vulnerable as the internet. From unpatched Windows to old Android versions to people plugging in random USB dongles to a million variations on XSS/XSRF: once you've made your corporate network secure against these attackers, it's also secure against the wider internet.
Few workplaces are fond of remote workers. The major reason a lot of people remain employed is so they have a purpose to wake up, leave their houses, and spend the day occupied by the relative comfort of an office building, surrounded by reasonably-intelligent coworkers, as a faux-family. And it's a slap in their face that you don't want to spend your time basking in their physical proximity.
Google doesn't like workers who always work remotely, but periodically working from home (or the bus, or hotel, or coffeeshop, or the lobby at the auto mechanic, etc.) for a day is common for all employees.
It isn't just corporations that are not fond of remote work. As a lay-employee, I am not fond of remote work either. I would much rather have all my teammates in the same room.
It's hard to fully trust a person that is remote. Are they keeping your data safe?
Another solution is Amazon's virtual workspaces for remote workers, which give the company more control over the computer the employee is using.
I wonder how far this really extends into their network, and how IPv6 is related. In principle it sounds really good to me. I realize this is mostly about access to corporate applications, but how much further could this approach go?
Thinking out loud: if I suddenly removed the firewall perimeter security from my network, moved security to devices/servers directly, dropped my NAT, and switched to IPv6 with all publicly routable addresses, my network infrastructure simplifies incredibly. However, I still have to protect my network to ensure quality of service/availability, and protect my devices/equipment from "public attacks". I guess the principle here is that the attack surface is the same if you can penetrate the layered security approach - it all ends at the devices and equipment.
The fact that all devices/equipment can now have a publicly routable/addressable IP in IPv6 solves the problem of running out of address space, and would fit hand in glove with such an architecture.
Put another way, the network becomes just the network, without the need to discern between the intranet/LAN, the extranet/WAN (or DMZ) and the Internet/WAN.
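As a side note on the addressing point: with classic SLAAC, a device can derive a globally routable IPv6 address from the network prefix plus its own MAC address (the modified EUI-64 scheme from RFC 4291). Many modern hosts use privacy extensions instead, but the sketch below shows how every device gets a unique public address with no NAT involved. The prefix and MAC are made-up example values.

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a MAC address."""
    parts = [int(b, 16) for b in mac.split(":")]
    parts[0] ^= 0x02  # flip the universal/local bit (RFC 4291)
    eui = parts[:3] + [0xFF, 0xFE] + parts[3:]  # insert ff:fe in the middle
    # Pack the 8 bytes into four 16-bit hex groups, dropping leading zeros.
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)


def slaac_address(prefix: str, mac: str) -> str:
    """Combine a /64 prefix with the EUI-64 interface identifier."""
    return prefix + eui64_interface_id(mac)


# Example with documentation-range prefix and an arbitrary MAC:
print(slaac_address("2001:db8:1:2:", "52:54:00:12:34:56"))
# -> 2001:db8:1:2:5054:ff:fe12:3456
```

Since every such address is globally unique, reachability is then purely a policy question on each host, which is exactly the "the network becomes just the network" idea above.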
This seems like the sort of thing that can work for Google because Google runs on Google software, which runs on Google hardware. They control the full stack from top to bottom, so they can decide where to put the doors and where to put the monitoring.
Most companies run on stuff that is not their own. Microsoft Exchange running on Windows running on VMware running in some 3rd-party datacenter is a fairly modern way to host an email server. In that situation, everything is out of your hands BUT the network edge. You don't audit Microsoft's code bases, you don't specify how Webmail works, you don't control the discovery, disclosure, or patching of critical vulnerabilities.
Sure maybe the firewall/IDS/VPN only keeps amateur griefers out, but there are way more of those than APTs.
And folks will only have limited insight into the internals of all this 3rd-party software. But if you have a gated network, then you can use a tool like NetWitness to characterize and alert on your traffic--and just your traffic.
"The Cloud" that they're talking about is their own datacenters (they're certianly not using EC2) and they're hosting their web-apps over a WAN without VPNs or other traditional forms of closing off access.
However, this doesn't say much about their datacenters which will still be heavily firewalled. IPMI, SSH, and other access wouldn't be shared over a wide open WAN. The "Cloud" (see: datacenter) LAN will still be protected traditionally.
This article doesn't have enough information in my opinion.
I mean, of course it's their own datacenters. They're not going to be putting their corporate data in a competitor's data center. That said, they could be leveraging new things in their Google App Engine cloud, which would actually make it "the cloud" as people refer to it.
This kills BYOD, right, at least for now? "Employees can only access corporate applications with a device that is procured and actively managed by the company"
It seems to me that any company whose business involves providing secure web apps to external users (who aren't using devices specially trusted by the company providing the service) ought to be able to provide its corporate applications on the same basis.
One of the biggest reasons for BYOD is so you don't need to deal with the crappy supplied hardware of your employer and you can bring your own. I'm pretty sure that Google employees get their pick of hardware so this would remove at least one big reason for BYOD.
This brings up a few questions:
1) Does Google not use the same publicly hosted version of Google Apps that we all use?
2) Does this only work with privately hosted versions of applications?
3) Are they using the publicly hosted version of Google Auth for the authentication piece?
4) Is the Device Inventory Database hosted on a public machine or is that deployed to a private network?
5) Digging into the white paper that provides a bit more information on how they're actually doing this… does anyone care to take a crack at explaining what this means? "BeyondCorp defines and deploys an unprivileged network that very closely resembles an external network, although within a private address space. The unprivileged network only connects to the Internet, limited infrastructure services (e.g., DNS, DHCP, and NTP), and configuration management systems such as Puppet."
(full white paper published by google available here: http://static.googleusercontent.com/media/research.google.co...)
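Regarding question 5: as I read it, the unprivileged network is an egress policy — devices placed on it can reach the Internet plus a short whitelist of infrastructure services, and nothing else internal. A toy sketch of that policy as data (the zone and service names here are my own shorthand, not from the paper):

```python
# Services the paper says the unprivileged network may reach internally.
ALLOWED_INTERNAL_SERVICES = {"dns", "dhcp", "ntp", "puppet"}


def egress_allowed(destination_zone: str, service: str) -> bool:
    """Return True if a device on the unprivileged network may reach the target.

    The unprivileged VLAN behaves like an external network: full Internet
    access, but corporate destinations only for a few bootstrap services.
    """
    if destination_zone == "internet":
        return True  # resembles an external network: Internet is reachable
    if destination_zone == "corp":
        return service in ALLOWED_INTERNAL_SERVICES
    return False
```

So a freshly connected (or not-yet-trusted) device can boot, get an address, sync its clock, and pull configuration, but can't browse the internal network.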
It's nice to see companies moving towards taking security more seriously, but boy oh boy some of it is a real pain in the ass. Every website you log into now needs to text you, or have a companion app or whatever. Every time you lock your screen to get a cup of water or take a leak, you have to log back in, wait for your flaky vpn connection to come back before you can resume what you were doing (maybe not an issue here if they actually do away with the VPN). "Credentials" often comes down to typing very long cryptic passphrases, on a glass screen, with dots instead of being able to see what you're typing. Et cetera.
This probably falls in the general category of a Good Thing™ for employees and people developing B2B applications, since internal systems are more easily accessible. But this will be a gut check/squeaky bum time for traditional on-premises B2B vendors like PeopleSoft/SAP/IBM and the like. The corporate firewall has always been a bastion of security they have been able to hide their applications behind. As the concept of a corporate firewall begins to fade, their security risk increases and previously non-worrisome attack vectors become serious problems for them.
Insecure enterprise software and poor security practices mean the death of firewalls and VPNs will take a long, long time. People hide behind these things for a reason.
Google is right here, though: this makes things easier for employees and probably saves them money (no VPN); unfortunately, most orgs don't have the staff/expertise to pull something like this off.
More importantly though, I think Google builds all their enterprise web apps in-house (speculating). Most orgs who do have intranet apps use 3rd-party off-the-shelf software, so pulling off Google's BeyondCorp architecture is less likely, as they can't control or easily modify how those apps work. Ergo, VPNs are here to stay.
Even for those orgs who write their own internal applications: do you really want to expose your internal analytics dashboard to the internet?! GASP.
> squeaky bum time

Never heard this expression before. A quick search defines it as "an exciting part of a sporting event, particularly the final minutes of a close game or season". Unfortunately, I still don't really get the reference. Could someone spell this out for me?
What has changed? I worked as a contractor at Google in 2013. Everyone had a securely locked down company laptop. All logins anywhere required a dual auth device.
I was really sick one day and I had no problems doing my work from home. Also, one of the great joys of working at Google is the availability of code labs that are individualized instruction to learn different aspects of their infrastructure and technology in general. I spent a ton of time when at home working through code labs that were relevant to my job. No problems with remote access.
Nothing new here... I work for Microsoft; we have had most of our tools in the cloud for quite a while. One thing's for sure: every new app is cloud-based. We use Azure AD and multi-factor auth to allow access from internal and external networks. It's pretty common with the small/medium companies I work with, maybe less so with large enterprises.
I think I'm missing something important here. I understand it as far as "internal networks give people a false sense of security," but it's still worth something, isn't it? Why not implement all of these security features AND keep your internal network locked up? Is it really just convenience?
The biggest problem with VPNs these days is that they connect a user to the network, not just to the applications they need to access. Any malware on the user's device can ride the VPN into the network and start having fun. Sure, you can micro-segment the network to limit the damage, but at least the Google approach puts all traffic through an "Internet-Facing Access Proxy", limiting exposure to the individual applications in question.
However, I completely agree with the previous post that user devices need to be considered untrusted. This is a huge problem with the Google approach: certificate distribution and management on thousands of employee-owned devices is neither practical nor scalable.
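The network-reach vs. per-application distinction above can be sketched like this (entirely hypothetical names, just to contrast the two decision shapes):

```python
# Hypothetical per-user application grants, for illustration only.
ENTITLEMENTS = {"alice": {"ticketing", "wiki"}}


def vpn_decision(credentials_ok: bool) -> str:
    # A VPN makes one yes/no decision; after "yes", malware on the device
    # can reach anything routable on the internal network.
    return "full-network-access" if credentials_ok else "deny"


def proxy_decision(user: str, device_cert_valid: bool, app: str) -> str:
    # An access proxy decides per request, per named application; raw
    # network reach is never granted to the client device.
    if device_cert_valid and app in ENTITLEMENTS.get(user, set()):
        return "allow"
    return "deny"
```

With the proxy shape, a compromised device with valid credentials still only exposes the specific applications that user is entitled to, evaluated on every request.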
I wonder which "cloud hosting provider" they will choose.
Microsoft? Amazon?
Does it make a difference?
If my company starts selling cloud hosting and then I announce my company will be hosting its internal applications in "the cloud" (i.e., in my own data centers), what are the security implications for my company?
Are they the same if some other company asks me to host their applications in my data centers?
Is this article a PR piece (or "submarine", as PG calls it)?

What do you think?
--
[1] https://www.schneier.com/blog/archives/2008/01/my_open_wirel...
dzhiurgis | 11 years ago:
Google did not just throw away this layer, but replaced it with device authentication. They are essentially using two-factor authentication.
mikecb | 11 years ago:
Edit: Great talk at LISA in 2013: https://www.usenix.org/conference/lisa13/enterprise-architec...
Splendor | 11 years ago:
> "The new model — called the BeyondCorp initiative — assumes that the internal network is as dangerous as the Internet."
passive | 11 years ago:
Of course, there's a certain irony that Google isn't fond of remote workers. :)
zenbowman | 11 years ago:
Tribal? Yes, unapologetically so.
superuser2 | 11 years ago:
Almost no one can actually route "publicly routable" IPv6. When it becomes a standard feature of DSL/cable, maybe.
bduerst | 11 years ago:
It also means you allow your device to be managed remotely by the company (i.e. purged if lost/stolen).