JonChesterfield|3 years ago
Emitting 'secure' code almost always means emitting 'slower' code, and one of the few things compilers are assessed on is the performance of the code they generate.
Compilers are built as a series of transformation passes. Normalisation is a big deal - if you can simplify N different patterns to the same thing, you only have to match that one canonical form later in the pipeline.
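A toy illustration of that canonicalisation idea (my sketch, not code from the book): several equivalent source patterns collapse into one canonical form, so later passes only have to pattern-match that one shape.

```cpp
#include <cassert>
#include <string>

// Toy canonicaliser: "x * 2", "2 * x", and "x + x" are all rewritten to the
// single canonical form "x << 1", so any later pass only has to recognise
// that one shape. (Hypothetical sketch, nothing like real LLVM pass code.)
std::string canonicalize(const std::string& expr) {
    if (expr == "x * 2" || expr == "2 * x" || expr == "x + x")
        return "x << 1";
    return expr;  // already canonical, or not a pattern we know
}
```

Here `canonicalize("x + x")` and `canonicalize("x * 2")` both come back as `"x << 1"`, so a downstream pass only needs one matcher instead of three.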
So if one pass makes code slower but more secure, later passes are apt to undo that transform, or to miss other optimisations because the code no longer looks as expected.
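A classic concrete instance of the undo problem (my illustration, not an example from the book): zeroing a secret buffer just before it goes out of scope is a dead store, so dead-store elimination is entitled to delete the `memset`. One common hedge is to write through a `volatile` pointer, which the optimiser must treat as observable.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Plain memset(buf, 0, n) on a buffer that is never read again is a dead
// store and may be eliminated. Writing byte-by-byte through a volatile
// pointer prevents that, at the cost of an unvectorised loop.
// (Illustrative sketch; the name secure_clear is mine.)
static void secure_clear(void* p, std::size_t n) {
    volatile unsigned char* vp = static_cast<volatile unsigned char*>(p);
    while (n--) *vp++ = 0;  // each store is observable, so it survives
}
```

Libcs ship variants of exactly this (`explicit_bzero`, C11 Annex K's `memset_s`) precisely because the plain `memset` version gets optimised away.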
So while it is useful to know various make-it-secure transforms, which this book seems to cover, it's not at all obvious how to implement them without collateral damage.
On a final note, compiler transforms are really easy to get wrong, so one should expect the implementation of these guards to be somewhat buggy, and those bugs themselves may introduce vulnerabilities.
1960s systems were already taking a security-first approach, and the industry would have kept going down that route if it weren't for the adoption of UNIX and C.
pjmlp|3 years ago
IBM even did their RISC research in PL.8, taking safety and a pluggable compiler infrastructure into consideration, similar to what people nowadays know from the LLVM approach.
Some would say that security measures in the car industry also slow drivers down and are a nuisance.
https://en.m.wikipedia.org/wiki/Unsafe_at_Any_Speed
The vast majority of businesses choose speed over security and avoid investing in security, since they can offload the cost of incidents to their users. One of the main reasons such "more secure tools" projects are interesting to users is that they provide an easy and cheap avenue towards claiming that an effort towards security was made, and thereby avoiding liability. On one hand, such tools do help make things more secure; on the other hand, with speed and ease of use (not security) as the top priorities, the effect is probably limited. People who care much more about security than average would not start a new project in C/C++ to begin with, and where legacy code is involved, dealing with it is hard enough already without trying to "make it secure".
staunton|3 years ago
The only way to really improve the level of security in the industry is to assign responsibility and damages to those who fail to implement it. So far, it seems all market participants are content with 90% of security concerns being addressed by security theater.
pjmlp|3 years ago
Returns in digital stores, increasing visibility of how much it actually costs in real money to fix those issues, warranty clauses in consulting gigs (usually free of charge), and the introduction of cyber security laws like in Germany [0].
[0] - https://www.bsi.bund.de/EN/Das-BSI/Auftrag/Gesetze-und-Veror...
WalterBright|3 years ago
> The only way to really improve the level of security in the industry is to assign responsibility and damages to those who fail to implement it.
This is the punishment approach. What it inevitably leads to is denial, cover-ups, unwillingness to innovate, and not fixing problems, because fixing them is an implicit admission of fault.
The better way is a no-fault approach: encourage disclosure and openness about bugs, and collaboration in fixing them.
loup-vaillant|3 years ago
> The vast majority of businesses choose speed over security
I would add that the vast majority of businesses also choose features over speed.
In some cases they pay lip service to speed, for instance by choosing C++, but pay zero attention to actual speed, because they end up writing in a pointer fest RAII style that destroys memory locality and miss the cache all the time. Compared to that, even Electron doesn’t look too unreasonable.
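The "pointer fest" layout being criticized looks something like the first function below, next to the contiguous layout it forgoes (my sketch, not from the thread). Both compute the same sum, but the first chases a separate heap allocation per element, while the second walks one contiguous buffer.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Sample { double value; };

// One unique_ptr per element: every iteration dereferences into a separate
// heap allocation, which scatters the data and tends to miss the cache.
double sum_pointer_fest(const std::vector<std::unique_ptr<Sample>>& xs) {
    double s = 0.0;
    for (const auto& p : xs) s += p->value;
    return s;
}

// Values stored inline: one contiguous allocation, sequential and
// prefetch-friendly access.
double sum_contiguous(const std::vector<Sample>& xs) {
    double s = 0.0;
    for (const auto& x : xs) s += x.value;
    return s;
}
```

Same asymptotic work, very different memory traffic; the contiguous version is what a vector of values gives you for free.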
WalterBright|3 years ago
> The vast majority of businesses choose speed over security
The D compiler would be faster if we turned off array bounds checking and assert checking. But we leave those security features turned on for release builds.
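For readers not using D: D checks `arr[i]` by default, and the check can be compiled out. A rough C++ analogue of the same trade-off (my sketch, not the D compiler's code) is `at()` versus `[]` — the checked access pays a compare-and-branch per index, but turns an out-of-range read into a catchable error instead of undefined behavior.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Checked access: costs a bounds test per call, throws std::out_of_range
// when i >= v.size() instead of reading out of bounds.
int checked_get(const std::vector<int>& v, std::size_t i) {
    return v.at(i);
}

// Unchecked access: no test, but an out-of-range index is undefined
// behavior — the speed/security trade-off the comment describes.
int unchecked_get(const std::vector<int>& v, std::size_t i) {
    return v[i];
}
```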
This is pretty great; they've done the work to produce useful capsule summaries of a bunch of memory safety topics (like forward- and backward-edge CFI, JOP, and PAC). Looking forward to seeing how far they can take it. The assembler snippets are useful and could be fleshed out more.
That was an excellent read. I look forward to enjoying their section on JIT compiler vulnerabilities, a whole fascinating topic in itself, when it is completed.
Compilers are an interest of mine, so I'll read the article later, but I'm curious whether this is talking about compilers in the C family, which are generally unsafe, or compilers for managed languages, which should never emit code allowing attacks (for some definition of 'never'). Which of these is this article/book discussing?