collingreene | 11 years ago
Finding deep, serious vulns like this in software can currently only be done by human beings. Tools are better at being authoritative, but each can only find vulns of a given type. For example, static analysis is a great fit for any vuln that boils down to a dataflow problem: user-controlled source -> ... -> dangerous sink. XSS, SQL injection, etc. fit this model. Fuzzers are great at finding bugs in parsers (and there are a surprising number of parsers in the world, 90% of which should never have been written). Instrumented dynamic analysis can do awesome work on memory issues. I explain all this to show that there are areas where tools are fantastic. But there are many areas where tools cannot help at all, and heartbleed was one of them.
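To make the source -> sink pattern concrete, here is a minimal sketch (toy code, not from any real project; the table and function names are hypothetical) of the dataflow shape static analyzers model, using SQL injection as the example:

```python
import sqlite3

def setup():
    # Hypothetical in-memory database for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    return conn

def lookup_vulnerable(conn, username):
    # SOURCE: `username` is attacker-controlled.
    # SINK: string formatting straight into execute() -> SQL injection.
    query = "SELECT secret FROM users WHERE name = '%s'" % username
    return conn.execute(query).fetchall()

def lookup_safe(conn, username):
    # Parameterized query: the tainted source never reaches the sink as code.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = setup()
payload = "nobody' OR '1'='1"               # classic injection payload
print(lookup_vulnerable(conn, payload))      # leaks alice's secret
print(lookup_safe(conn, payload))            # returns nothing
```

A taint-tracking analyzer flags `lookup_vulnerable` because a path exists from the source parameter to the `execute()` sink without sanitization; `lookup_safe` breaks that path.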
The best security tools available were (presumably) run across OpenSSL before heartbleed and (certainly) with increased scrutiny after. None of them found it. Simple limitations in static analysis lead me to believe they would never have found it on their own (most static analysis tools stop at 5 levels of indirection). Some background:
1. http://blog.trailofbits.com/2014/04/27/using-static-analysis...
2. http://security.coverity.com/blog/2014/Apr/on-detecting-hear...
3. http://www.grammatech.com/blog/finding-heartbleed-with-codes...
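The earlier point about fuzzers and parsers can also be sketched in a few lines (a hypothetical toy parser, not any real project): hand-rolled parsers tend to carry unchecked assumptions, and even dumb random inputs violate them almost immediately.

```python
import random

def parse_pairs(text):
    """Parse 'k=v;k2=v2' into a dict. Naive: assumes every field has '='."""
    pairs = {}
    for field in text.split(";"):
        if not field:
            continue
        key, value = field.split("=", 1)   # crashes if '=' is missing
        pairs[key] = value
    return pairs

def fuzz(trials=1000, seed=42):
    """Throw random short strings at the parser; count the crashes."""
    rng = random.Random(seed)
    alphabet = "ab=;1"
    crashes = 0
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(6)))
        try:
            parse_pairs(s)
        except ValueError:   # unpack fails when a field lacks '='
            crashes += 1
    return crashes

print(fuzz())   # a naive parser falls over many times in 1000 tries
```

Real fuzzers (AFL, libFuzzer) add coverage guidance and input mutation on top of this basic loop, which is what lets them reach deep parser states.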
If you have immature projects, sure, run tools against them and some bugs will shake out. But if you want to find the next heartbleed, a tool won't do it, which is where your conclusion is mistaken.
The question then becomes how to cultivate and encourage more people to find vulns like this. Money seems like a good incentive for most, although Neel Mehta did it of his own volition. I don't know the answer to that question, but things like Google's Project Zero are exactly what I would try first.
dllthomas|11 years ago
kazinator|11 years ago
My point wasn't that only tools should be used; I put that in as an aside (wrapped in glaring parentheses!). If I hadn't, someone would have pointed it out to me in a reply: "Hey you fool, of course you can track whether people are really bug hunting and being honest about their activity, if they are using tools whose results are reproducible."
Of course tools only find things that they are designed to find. My point was not at all that tools should be used because they will find the next Heartbleed, but rather that you have some hope of tracking the progress of a security team that is applying tools.
The topic of the submission isn't what the best way to find security holes is, but spending money on it. My view is that spending money wisely requires some definition of "return on investment" and tracking of concrete goals. This is hard to do with security research (once tools-based approaches have been exhausted).
collingreene|11 years ago
I can't think of anything closer to "throwing money at FOSS" than something like the Internet Bug Bounty. Google/Facebook/etc. collected a pile of money and put it up as a bug bounty for software used by most of us on the internet. https://hackerone.com/ibb - click through to the projects and look at all the bugs that have been rewarded. https://hackerone.com/internet and https://hackerone.com/sandbox are the coolest.
My interpretation of your general conclusion is: without quantification, spending money/effort on security is not useful. I disagree with that, because that's the nature of the beast. It's useful to have people look through code, and some weeks there will not be a lot of findings. It's absolutely okay for a status report to read "I tried this, thought it might work, investigated the way X works to ensure it doesn't do Y - 0 total findings".
Whom to pay and how to know you are getting your money's worth are not unsolvable problems. For example, at the company I work with we hold yearly bake-offs, giving different security consultants the same code to see what bugs they find; we then use the best 2 or 3. That's an approximation, sure, but it solves the whom-to-pay problem.
How to know if you are getting your money's worth - this is harder, and rubs against the essence of security/QA work. No one knows what lurks in randomCode.tar.gz; that is the whole point of the exercise. But apparently the world agrees it's useful to have corporate application security teams do some vetting of the code looking for vulns - more useful than nothing, at least. More useful than tools? Well, that's a weird comparison, because you likely need security people (or at least engineers with a bit of security background) to run the tools. I think tools vs. people is a different debate, but I would bet on people even at an equal cost point.
I agree that quantification of security research is hard; I disagree that because we can't quantify something it is not useful.