I think one of the unstated problems with static analysis is just keeping track of the results. I know that when I started working with these tools, it was a huge PITA just dealing with the various output files.
That's why I created tools to convert the output from different tools into a common CSV format that can be databased and used to compare output from different tools, or from different versions of the code (e.g., after fixing errors reported by the tools).
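For anyone curious what that kind of normalizer looks like, here is a minimal sketch. The sample XML mimics cppcheck's `--xml-version=2` output from memory, and the CSV column names are made up; the commenter's actual tools are at the link below and surely differ.

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical sample of cppcheck --xml-version=2 output (not real scan results).
CPPCHECK_XML = """<?xml version="1.0"?>
<results version="2">
  <errors>
    <error id="nullPointer" severity="error" msg="Null pointer dereference: p">
      <location file="src/io.c" line="42"/>
    </error>
    <error id="uninitvar" severity="warning" msg="Uninitialized variable: n">
      <location file="src/util.c" line="7"/>
    </error>
  </errors>
</results>"""

def cppcheck_to_rows(xml_text):
    """Flatten cppcheck XML findings into (tool, file, line, id, severity, msg) rows."""
    rows = []
    for err in ET.fromstring(xml_text).iter("error"):
        loc = err.find("location")
        rows.append((
            "cppcheck",
            loc.get("file") if loc is not None else "",
            loc.get("line") if loc is not None else "",
            err.get("id"),
            err.get("severity"),
            err.get("msg"),
        ))
    return rows

def rows_to_csv(rows):
    """Serialize rows in one shared schema so every tool's output lands in one table."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow(["tool", "file", "line", "id", "severity", "msg"])
    w.writerows(rows)
    return buf.getvalue()

print(rows_to_csv(cppcheck_to_rows(CPPCHECK_XML)))
```

Once everything is in one schema, diffing two runs (or two versions of the code) is a plain database query.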
Interesting approach. Where I work, we use Jenkins for collecting results. That way, for each build of our application we have a history of results for static analysis. Jenkins has good tools for storing and displaying this information, as well as the ability to show trends over time.
One of the things I like about this article is that it gives another example showing how formal methods catches deep errors unlikely to be caught with human review or testing:

"Overall, the error trace found by Infer has 61 steps, and the source of null, the call to X509_gmtime_adj(), goes five procedures deep and it eventually encounters a return of null at call-depth 4."
I think the example Amazon gave for TLA+ was thirty-something steps. Most people's minds simply can't track 61 steps into software. Tests always have a coverage issue.
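To see why nobody catches this in review, here is a contrived sketch of the same shape of bug (Python rather than the article's C, and the call depth is just picked to mirror the quote): the null originates far from the dereference, so no single function looks wrong in isolation.

```python
# A contrived deep call chain, loosely mirroring the X509_gmtime_adj()
# trace: the None originates several frames away from the crash site.

def level5():
    return None          # the actual source of the null

def level4():
    return level5()

def level3():
    return level4()

def level2():
    return level3()

def level1():
    t = level2()         # a reviewer sees a plausible-looking call...
    return t.upper()     # ...and the crash only happens here

try:
    level1()
except AttributeError as e:
    print("crash:", e)
```

An interprocedural analyzer tracks the None through every return; a human reviewing `level1()` alone has no reason to suspect it.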
> Zoncolan catches more SEVs than either manual security reviews or bug bounty reports. We measured that 43.3% of the severe security bugs are detected via Zoncolan. At press time, Zoncolan's "action rate" is above 80% and we observed about 11 "missed bugs."
> For the server-side, we have over 100-million lines of Hack code, which Zoncolan can process in less than 30 minutes. Additionally, we have 10s of millions of both mobile (Android and Objective-C) code and backend C++ code
> All codebases see thousands of code modifications each day and our tools run on each code change. For Zoncolan, this can amount to analyzing one trillion lines of code (LOC) per day.
Are those 11 "missed bugs" on the 100-million server-side lines of code per run, or ever?
Also, the main issue with static analysis tools tends to be not false negatives, but false positives. That is, they churn out tons and tons of alerts that aren't actually bugs. Some such systems alert so much that they aren't worth using.
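One common mitigation for that alert fatigue is baselining: suppress every alert that already existed before the change, so developers only see findings their diff introduced. A minimal sketch (the alert schema and field names here are invented for illustration):

```python
# "Only show new alerts" triage sketch: alerts present in a baseline run
# are suppressed, so only findings introduced by the change surface.
# The key deliberately omits line numbers, which shift between revisions.

def alert_key(alert):
    return (alert["tool"], alert["file"], alert["id"], alert["msg"])

def new_alerts(baseline, current):
    seen = {alert_key(a) for a in baseline}
    return [a for a in current if alert_key(a) not in seen]

baseline = [
    {"tool": "cppcheck", "file": "src/io.c",
     "id": "nullPointer", "msg": "Null pointer dereference: p"},
]
current = baseline + [
    {"tool": "cppcheck", "file": "src/io.c",
     "id": "uninitvar", "msg": "Uninitialized variable: n"},
]

print(new_alerts(baseline, current))
```

It doesn't make the false positives go away, but it stops them from being re-reported on every run, which is usually what makes a noisy tool unusable.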
> We also use the traditional security programs to measure missed bugs (that is, the vulnerabilities for which there is a Zoncolan category), but the tool failed to report them. To date, we have had about 11 missed bugs, some of them caused by a bug in the tool or incomplete modeling.
A missed bug is presumably one the tool was designed to spot but didn't, during the period in which it has been running.
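Worth noting that the article's "action rate" and "missed bugs" measure different denominators, roughly precision versus recall within the categories the tool covers. A back-of-the-envelope calculation (the 11 misses and the ">80%" are from the article; the 1000 found bugs and the 80-of-100 split are made-up numbers):

```python
# Toy metrics: action rate ~ fraction of reported alerts engineers acted on;
# recall ~ fraction of in-scope bugs the tool actually flagged.
# All specific inputs below are illustrative, not from the article.

def action_rate(actioned, reported):
    return actioned / reported

def recall(found, missed):
    return found / (found + missed)

print(f"action rate = {action_rate(80, 100):.0%}")
print(f"recall ~= {recall(1000, 11):.3f}")
```

So 11 misses is only interpretable relative to how many in-scope bugs the tool did catch over the same window.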
Is there something wrong with ACM's load balancer or whatever? First I managed to read to the end of the article, but downloading the PDF showed "Oops! This website is under heavy load." Now the article page is under heavy load too.
Edit: It worked again right after I posted this comment.
wallstprog|6 years ago
These tools currently work with cppcheck, clang and PVS-Studio and can be found here: http://btorpey.github.io/blog/categories/static-analysis/
nh_99|6 years ago
nickpsecurity|6 years ago
SanchoPanda|6 years ago
m0zg|6 years ago
muglug|6 years ago
sanxiyn|6 years ago
dhekir|6 years ago
Hopefully Software Heritage (https://www.softwareheritage.org) will help with that.
mhxion|6 years ago
sjtindell|6 years ago