frabert | 3 months ago
This primitive we're trying to introduce is meant to make up for this shortcoming without having to introduce additional rules in the standard.
delta_p_delta_x|3 months ago
I would argue that given a certain ISA, it's probably easier to write an autocomplete extension for assembly targeting that ISA, rather than autocomplete for C, or goodness forbid, C++.
Likewise for structs, functions, jump targets, etc. One could probably set up snippets corresponding to different sorts of conditional execution—loops, if/else/while, switch, etc.
jfindper|3 months ago
I'm not exposed to this space very often, so maybe you or someone else could give me some context. "Sabotage" is a deliberate effort to ruin or hinder something. Are compiler engineers deliberately hindering the efforts of cryptographers? If yes... is there a reason why? Some long-running feud or something?
Or, in the course of compiler engineers' efforts to make compilers faster, are cryptographers just getting the "short end of the stick," so to speak? Perhaps forgotten about because the number of cryptographers is dwarfed by the number of non-cryptographers? (Or any other explanation that I'm unaware of?)
stouset|3 months ago
CPUs love to do branch prediction so that computation is already performed when a branch is guessed correctly, but cryptographic code needs to take the same time no matter the input.
When a programmer asks for some register or memory location to be zeroed, they generally just want to be able to use a zero in some later operation and so it doesn’t really matter that a previous value was really overwritten. When a cryptographer does, they generally are trying to make it impossible to read the previous value. And they want to be able to have some guarantee that it wasn’t implicitly copied somewhere else in the interim.
soulbadguy|3 months ago
A lot of software engineers see this as compiler engineers caring only about performance as opposed to other aspects such as debuggability, safety, compile time, productivity, etc. I think that's where the "sabotage" comes from: the focus on performance to the detriment of everything else.
My 2 cents: the core problem is programmers expecting invariants and properties not defined in the language standard. The compiler only guarantees what the standard defines; expecting anything else is problematic.
colmmacc|3 months ago
Yes, languages do lack good mechanisms to mark variables or sections as needing constant-time operation ... but compiler maintainers could have taken the view that this means all code should be compiled that way. Instead, we're now marking data and sections as "secret" so that they can be left unoptimized. But why not the other way around?
I understand how we get here; speed and size are trivial to measure and they each result in real-world cost savings. I don't think any maintainer could withstand this pressure. But it's still deliberate.
fooker|3 months ago
Any side effect is a side channel. There are always going to be side channels in real code running on real hardware.
Sure, you can change your code, compiler, or even hardware to account for this, but at its core that is security by obscurity.