dsign|3 days ago
I've wondered for a long time whether we could have made do without protected mode (or hardware protection in general) if user code were verified/compiled at load time, the way the JVM or .NET do it. Could the saved transistor budget have been used to offset any performance losses?
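To illustrate the idea being asked about: JVM-style load-time verification proves properties of code once, before it runs, instead of checking at runtime. Below is a minimal sketch for a made-up toy stack machine (the `Op` instruction set and `verify` function are invented for this example, not any real verifier); it statically proves a program can never underflow or overflow its operand stack.

```rust
// Hypothetical toy ISA: the verifier walks the program once at
// load time and tracks the worst-case operand-stack depth, so no
// per-instruction runtime checks (or hardware traps) are needed.
enum Op {
    Push(i64),
    Add, // pops two operands, pushes one
    Pop,
}

// Returns Ok(final stack depth) if the program is safe for a
// stack of `max` slots, Err(reason) otherwise.
fn verify(prog: &[Op], max: usize) -> Result<usize, &'static str> {
    let mut depth: usize = 0;
    for op in prog {
        match op {
            Op::Push(_) => {
                if depth == max {
                    return Err("stack overflow");
                }
                depth += 1;
            }
            Op::Add => {
                if depth < 2 {
                    return Err("stack underflow");
                }
                depth -= 1;
            }
            Op::Pop => {
                if depth == 0 {
                    return Err("stack underflow");
                }
                depth -= 1;
            }
        }
    }
    Ok(depth)
}
```

A real verifier (like the JVM's) also checks types and branch targets, but the shape is the same: reject unsafe programs at load, then run the accepted ones without protection checks.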
st_goliath|3 days ago
https://en.wikipedia.org/wiki/Singularity_(operating_system)
Managed code, the safety properties of its C#-derived programming language, static analysis, and verification were used rather than hardware exception handling.
avadodin|3 days ago
I think hardware protection is usually the easier sell, but not when it's slower or more expensive than the alternative.
alnwlsn|3 days ago
edit: I missed it was linked on the above page
rwallace|3 days ago
Basically, you have to have out-of-order/speculative execution if you ultimately want the best performance on general/integer workloads. And once you have that, timing information is going to leak from one process into another, and that timing information can be used to infer the contents of memory. As far as I can see, there is no way to block this in software; there is no substitute for the CPU knowing 'that page should not be accessible to this process, activate timing leak mitigations'.
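The point above is the shape of Spectre v1: software verification can prove the bounds check below is architecturally correct, yet a speculating CPU may execute the body before the check resolves. The following is a conceptual sketch only (not a working exploit; the array layout and names are made up), showing why the leak is invisible to any software-level checker.

```rust
// Spectre-v1-style gadget shape. Every architecturally visible
// execution respects the bounds check, so a load-time verifier
// would rightly accept this code. But the CPU may speculatively
// run the indexed load with an out-of-bounds `x` before the
// branch resolves; which cache line of `probe` gets pulled in
// then encodes the secret byte, recoverable later by timing.
fn gadget(data: &[u8], probe: &[u8; 256 * 64], x: usize) -> u8 {
    if x < data.len() {
        // Speculation may reach this load even when x is out of
        // bounds; the cache side effect survives the rollback.
        probe[data[x] as usize * 64]
    } else {
        0
    }
}
```

Nothing in the language semantics changed, which is why the mitigation has to live where the speculation does: in the hardware (or in compiler-inserted fences that pessimize exactly the code a software-only design hoped to speed up).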
zozbot234|3 days ago
A far greater problem is that until very recently, practical memory safety required the use of inefficient GC. Even a largely memory-safe language like Rust still requires runtime memory protection unless stack depth requirements can be fully determined at compile time (which they generally can't, especially when separately provided program modules are involved).
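A minimal illustration of the stack-depth point: in the sketch below (function name invented for the example), the recursion depth depends on a runtime value, so no compile-time analysis can bound the stack it uses. What keeps an overflow from silently corrupting adjacent memory is the guard page, i.e. exactly the hardware memory protection under discussion.

```rust
// Stack use here grows linearly with the runtime value `n`,
// which the compiler cannot know (it could come from user input
// or a separately compiled module). With no static bound, safety
// on overflow relies on the OS guard page turning the overrun
// into a fault instead of silent memory corruption.
fn depth(n: u64) -> u64 {
    if n == 0 {
        0
    } else {
        // Not a tail call: each level keeps a live stack frame.
        1 + depth(n - 1)
    }
}
```

So even "compile-time safe" languages quietly lean on hardware protection for this one resource, unless the toolchain does whole-program stack analysis or inserts explicit stack probes with a software check.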
ttflee|3 days ago