This is super interesting and ambitious and I'm always sort of envious when I look at big engineering orgs, like Dropbox and Slack, where everything has fine-grained instrumentation.
We do this kind of work for startups, and we have a small roster of companies where we're on the hook for this kind of corpsec.
But then I think about what it'd be like to actually deploy and operationalize this stuff, and it gives me pause.
I have two big questions --- real questions, like, I have no pretense of having an answer and am speaking from ignorance --- about how this stuff works in practice.
The first is privacy. They're deploying this across a fleet of company laptops. People do all sorts of stuff on their corp laptops. I don't know enough about Dropbox's company culture to know whether people ever use their machines for personal stuff, but I do know that occasional personal use is an SFBA startup norm. They're collecting essentially a continuous bash history from every user in their fleet. How comfortable are engineers with that? I don't have a strong opinion, but it gives me enough pause that I'd be a little afraid to ask a client to do it.
The second is: what do you do with all that information? I see how this addresses malware response, and I see how this would be useful as a forensic archive for general IR. But as a day-to-day tool, connection-by-connection, file-by-file granular data from a mostly technical team seems incredibly noisy. Most engineers look like attackers: they make out-of-process connections to random backend resources, copy things around, and run new code.
I acknowledge right away that my vantage point is small startups, not organizations running at the scale Dropbox is.
> They're deploying this across a fleet of company laptops ... How comfortable are engineers with that?
I worked on the corporate infrastructure team at Dropbox; at the time my team was responsible for installing and managing things like osquery on laptops, with the data going to the security team's tools, so I have thought about this. Speaking for myself, the Dropbox culture takes trustworthiness very seriously. "Be worthy of trust" is #1 on the list of company core values, and it is deeply embedded into the culture. That trust covers much of how the company operates, including privacy. Spying on employees is so against company culture that, even though I knew exactly what data was being sent, I trusted the security team not to use it for anything other than protecting me. The detection and response team is made up of the most security- and privacy-conscious people I've ever met.
I think the mindset of respecting coworker privacy is important for anyone in corporate infrastructure, including network and system administrators.
But I've worked for other big corporations too, without that trust. I think employees need to know that when they use corporate computers and communications systems, they aren't private. Don't do stuff you shouldn't be doing at work.
> what do you do with all that information? I see how this addresses malware response, and I see how this would be useful as a forensic archive for general IR. But as a day-to-day tool, connection-by-connection, file-by-file granular data
When I worked there, the security team reached out to me to ask about anomalous behavior, so there is active detection going on. That was a couple of years ago; I'm sure they have improved since then too. Keep in mind, they have a full-time detection and response team, in addition to all the other security teams.
> you may run into unexpected performance issues that make the machine nearly unusable by your employees. You might also experience issues like having hosts unexpectedly shut down due to a kernel panic
It is fairly evident they are talking about Carbon Black.
Carbon Black's kernel module panics on macOS 10.13, and when you use it for development workloads it eats up half of your CPU in kernel space, making the entire machine unresponsive.
All of the large SFBA tech companies have something like Carbon Black, and they proxy your DNS requests through OpenDNS. Every network interaction is logged, and so is whatever you do on your computer, personal or not. Whatever controls they have over security engineers looking at that info are probably not enough. A creepy stalker sysadmin is a tech industry scandal waiting to happen.
I've also heard rumors that at some companies they even give reports of what employees are doing on their computers to managers, and managers are told not to tell their reports. I also think Google records every keystroke. Those are rumors, so take them with a big grain of salt.
I don't like it, but if I want SFBA pay, I've come to accept there is no privacy on work computers. The only things I do on my work computer are HN and Twitter, things that are effectively public. Everything else goes through my personal smartphone. I have my own computers, and in the end it mostly means I sometimes have to carry 5 extra pounds in my backpack.
IANAL, but people really shouldn't use their corp laptops for personal things. I know they do, but there really shouldn't be any legitimate expectation of privacy on your corporate laptop: not only do you sign a contract stating as much, but anything you create on there (on company time or not) is company IP, whether related to your work or not. California is very careful about protecting employee IP created on personal time, but only if it's done without corporate devices.
I used to work at Dropbox (as a PM, not a dev so I can't speak to that specific experience).
It was made pretty clear during orientation that there was monitoring software that needed to be installed on your computer so we could keep our various compliance certifications (HIPAA, PCI, etc.). I think they were pretty clear that humans were not actively reviewing these logs (i.e., a computer would flag activity based on heuristics), which definitely made me feel better. I was friends with people on the internal security team, and it was very obvious that they took employee privacy very seriously, at least from a philosophical perspective.
Obviously, people used their work computers for personal things. People installed Steam, Skype, whatever. I never heard of anyone getting in trouble for using that kind of software.
Security actually contacted me about my usage once. I was doing some Wikipedia browsing about password cracking software and noticed that there was one program in particular (Cain and Abel) that was exclusively for Windows. That struck me as weird, because I expected most security software to run on *nix systems. Anyway, I went to the Cain and Abel homepage, browsed around a bit, and then went on my merry way.
The next day, I got a somewhat alarming email from the security team alerting me that my computer had been compromised and removed from the network. I reached out to the guy who sent the email over Slack and went to his desk for a quick meeting. I asked him what the deal was, and he checked my MacBook's S/N against the list of flagged machines. He asked me if I had gone to "www.oxid.it", and at that point I freaked out a bit because I didn't recognize the domain. We searched the domain, and it was at that point I realized it was the same software I had been looking up the previous day, and I told him the site visit had been intentional. He explained that visiting the site is behavior consistent with a compromised machine, which would explain why I had gotten flagged.
I pointed out that C&A is Windows-only software and I was running macOS, and he gave me the (somewhat unsatisfying) answer that you can't be too careful. After that, he told me to be careful and sent me on my way.
Overall, I had a pretty positive perception of the internal security team. They had a strong reputation and seemed to execute competently without being obstructionist.
I hope this answer gives you a bit of insight. I'd be interested to see how other shops do it.
Making this much data operational in real time seems like a huge task. I'm wondering if it's mostly used as a postmortem tool to determine the attack vector and close that hole going forward.
Also I am interested to know if this data would fall under GDPR for any employees in the EU, and if so, how that data could be scrubbed if the employee left the company. In the US it's assumed that any information about actions on corporate hardware is owned by the company, but maybe that's not true outside US borders.
My understanding is that it's actually not all that common for engineers to do quite the same things attackers do, since a lot of intel is fairly specific. Sure, abnormal network or process behavior might cause interest, but if it's not connecting to "superbadmalware.org" or running "known-exploit.exe", a dev probably doesn't cause very many false positives. It probably depends on the company, and the people looking out for that company will know the difference.
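As a rough illustration of why intel-driven matching stays quiet for developers: a filter that only fires on known-bad indicators looks something like the sketch below. The domain names and event fields are made up for illustration (the one hash is the well-known EICAR test file MD5), not any real feed.

```python
# Minimal sketch of intel-driven (rather than anomaly-driven) detection:
# events are only flagged when they match a curated indicator list, so a
# developer hitting random internal backends generates no alerts.
# Indicator values below are illustrative, not a real threat feed.

BAD_DOMAINS = {"superbadmalware.org", "evil-cdn.example"}
BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # EICAR test file MD5

def flag_event(event: dict) -> bool:
    """Return True only when the event matches known-bad intel."""
    if event.get("dest_domain") in BAD_DOMAINS:
        return True
    if event.get("binary_hash") in BAD_HASHES:
        return True
    return False

events = [
    {"dest_domain": "internal-backend.corp", "binary_hash": "abc123"},  # dev noise
    {"dest_domain": "superbadmalware.org", "binary_hash": "deadbeef"},  # intel hit
]
hits = [e for e in events if flag_event(e)]
```

The dev's "weird" connection to an internal backend never matches, which is the point: specificity of the intel, not normality of the behavior, drives the alert volume.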
As for what to do with all that information, it's definitely a great question. Right now I think the answer is person-driven (get eyeballs on the intel), but obviously that's not going to last. Everyone in the space (AFAIK) is trying to take what analysts do, and productize it somehow.
I'm sure you've got insight into how that's happening on your end, it's a pretty straightforward leap when faced with, "Do I scale my service?" questions.
> People do all sorts of stuff on their corp laptops. I don't know enough about Dropbox's company culture to know whether people ever use their machines for personal stuff, but I do know that occasional personal use is an SFBA startup norm.
Hmm I wonder if this is true or not, or if it’s a startup thing. As someone who has worked for BigCorp Inc. companies, I know I carefully maintain a strict separation between my work and personal devices, for privacy reasons and simply to follow company policies WRT confidentiality. Is this “crossing the streams” between work and personal stuff that common in the startup world? I’d think at the very least even a small startup would want to keep their secret IP off personal devices and limit risky horseplay on company equipment.
> https://objective-see.com/products.html

They don't really. Objective-See's tools are not bad for what they are (utilities), but they are lacking in:
1. Code quality (Patrick Wardle is not an engineer and it shows in his code). This is not terribly important for an end-user utility, but it becomes very important when you're deploying kernel modules across your entire organization that need to be absolutely rock solid and not introduce additional threat vectors.
2. Distributed nature (they're utilities meant to be executed by end-users on their own machines, not distributed agents syncing up with cloud servers).
And so on.
This has been discussed before, in "Ask HN: Is no anti-virus software still best practice for mac?": https://news.ycombinator.com/item?id=16904103#16904721

> https://github.com/facebook/osquery/blob/master/packs/osx-at...

There is also a discussion of how this type of monitoring worked out in practice in the Google/Uber/Levandowski case last year: https://news.ycombinator.com/item?id=13860890#13861475

> the level of detail that Google has over the logs and actions of their laptop

> https://github.com/google/grr

After all the praise for OSS in that post, one (me ;)) would have assumed they'd announce the open-source release of their "plumbing" code that makes it all work nicely, builds process trees, etc.
Trail of Bits has done a study on how large technology companies are increasingly switching to osquery for their endpoint monitoring needs. You can read the results here:

https://blog.trailofbits.com/2017/11/09/how-are-teams-curren...

https://blog.trailofbits.com/2017/12/21/osquery-pain-points/

https://blog.trailofbits.com/2018/04/10/what-do-you-wish-osq...

We also offer a commercial service to make custom modifications, bugfixes, and feature enhancements to osquery: https://www.trailofbits.com/services/osquery-support/. It's little known at this point, but we do the same for Google Santa too!
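For anyone curious what the osquery side of this looks like, scheduled queries ship as JSON "packs". The fragment below is a minimal sketch, not Dropbox's or anyone's actual configuration; the query names, intervals, and table choices are my own:

```json
{
  "queries": {
    "launchd_jobs": {
      "query": "SELECT name, path, program_arguments FROM launchd;",
      "interval": 3600,
      "description": "Snapshot launchd jobs to spot new persistence"
    },
    "listening_processes": {
      "query": "SELECT p.name, lp.port, lp.address FROM listening_ports lp JOIN processes p USING (pid);",
      "interval": 300,
      "description": "Processes accepting network connections"
    }
  }
}
```

Each query runs on its interval and the agent ships diffs upstream, which is where the "continuous bash history" volume tptacek worries about comes from.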
I started to do something like this for my Windows box but then I realized that tracking process execution is not all that useful if you don't also track DLLs (because it's simple to just dump DLLs in a trusted applications folder and wait for them to be loaded). Tracking DLLs would just produce too much noise to monitor which put me off the whole thing altogether.
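One way to tame that noise is baselining: hash the modules once, then only surface ones you haven't seen before. A minimal sketch, with illustrative paths and an illustrative `.dll` glob (a real deployment would hook the loader rather than scan directories):

```python
# Sketch: instead of logging every DLL load (too noisy), keep a baseline of
# known module hashes and surface only modules not seen before.
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def new_modules(module_dir: Path, baseline: set[str]) -> list[Path]:
    """Return modules whose hash is not in the baseline."""
    return [p for p in sorted(module_dir.glob("*.dll"))
            if sha256_file(p) not in baseline]
```

The alert stream then only contains genuinely new binaries, at the cost of maintaining the baseline as software updates land.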
Shameless plug, but anyone interested in deploying osquery and Google's Santa into their environment should check out https://www.zercurity.com - it supports Linux and Windows too.
OpenBSM is awesome except you’re forced to invent your own way of log gathering - which becomes more painful when you’re mobile or offline and then you’ve got to keep state on what’s been transmitted to the mothership.
Would be nice to get some insight into Dropbox’s solution here...
To address logging offline, you need a log shipper that will do reliable delivery and pick up where it left off. I think rsyslog, Elastic Beats, and the Splunk forwarder will all do that. Then logs are sent whenever a machine connects to a network.
For mobile (online but outside corporate network) there are two options I've heard of being done:
1. Have each endpoint have a unique TLS certificate, and have the log shipper do mutual TLS to the logging server which has a public IP.
2. Have a backhaul VPN that is always connected, automatically, to the monitoring network, and send the logs over that. That VPN is separate from the user VPN that gives access to the corporate network.
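The "pick up where it left off" part can be sketched as a shipper that persists a byte offset and only advances it after a successful send. In a real deployment the `send` callable would wrap a mutual-TLS connection using the endpoint's client certificate (e.g. via Python's `ssl.SSLContext.load_cert_chain`); this is a sketch of the state-keeping, not any vendor's implementation:

```python
# Sketch of the "keep state on what's been transmitted" problem: persist a
# byte offset, advance it only after a successful send, so logs buffered
# while offline are delivered exactly once when connectivity returns.
from pathlib import Path

def ship_pending(log_path: Path, state_path: Path, send) -> int:
    """Send unshipped bytes of log_path; return the new offset.

    The offset is persisted only after send() succeeds, so a failed send
    (e.g. no network) is simply retried on the next run."""
    offset = int(state_path.read_text()) if state_path.exists() else 0
    with open(log_path, "rb") as f:
        f.seek(offset)
        pending = f.read()
    if pending:
        send(pending)                       # raises on failure -> offset untouched
        offset += len(pending)
        state_path.write_text(str(offset))  # persist progress
    return offset
```

Run it on a timer (or on network-up events) and option 1 or 2 above just determines what `send` connects to.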
This is nice, but what would be better is the glue they use to integrate these tools with other monitoring solutions. How do they merge these tools into their existing infrastructure? Do they parse the logs from OpenBSM, Santa, and osquery into ELK? If so, how?
The real difficulty is not finding useful open source tools, it is integrating them into existing monitoring solutions used within an organisation to get a single view of activity on a system and on a collection of systems (i.e. how do you make the tool scale).
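As a sketch of that glue: much of it is just normalizing each agent's log format into JSON documents an ELK pipeline can index. The parser below assumes a pipe-delimited key=value line, which is roughly the shape of Santa's decision log; the sample line itself is invented:

```python
# Sketch of integration "glue": normalize an endpoint agent's pipe-delimited
# key=value log line into a JSON document for indexing (e.g. by Logstash/ELK).
# The sample line mimics the shape of a Santa decision log but is illustrative.
import json

def parse_kv_line(line: str) -> dict:
    """Split 'k1=v1|k2=v2|...' into a dict; values may themselves contain '='."""
    return dict(field.split("=", 1)
                for field in line.strip().split("|") if "=" in field)

sample = "action=EXEC|decision=ALLOW|reason=CERT|sha256=abc123|path=/usr/local/bin/tool"
doc = json.dumps(parse_kv_line(sample), sort_keys=True)
```

Once every source (OpenBSM, Santa, osquery) is reduced to documents with shared field names, the "single view of activity" is a query problem rather than a parsing one.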
Was anybody else thinking these tools would be great for general development and debugging purposes? Anything that uses the network or file system anyway. The fact that they can detect malware reads as just an aside to me.
There is also an intellectual property issue. In my case, whatever code I write on a corporate laptop belongs to the company I work for, so it's kind of wrong to use it for pet projects.