It appears that most of these attacks relied on exploiting the unfortunate design of C, which makes manual memory management the default and safe, managed memory the special case. It should be the reverse. Speed will always matter, but you don't have to use risky, manual memory management everywhere to get speed; you just need it in the few spots where it makes a difference.
In the majority of places in your code, manual memory management gives you no benefit but does expose you to a possible vulnerability if you make a mistake. If the default, lazy option were to let the well-tested runtime do the job for you, yet you could do a little extra work and get manual override wherever you wanted, and manual override everywhere brought you essentially back to C, I think we would have much safer code without a noticeable loss of performance.
Edit: I just realized in the shower that I was saying "memory management" when I meant direct "memory manipulation" more generally. I'm including arrays accessed by memory address rather than by bounds-checked index, pointer arithmetic, etc., not just malloc and free.
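A rough sketch of that default-safe, opt-in-manual split, using Rust as an illustration (the function names are mine, not from the thread): indexing is bounds-checked by default, and the unchecked "manual override" must be explicitly marked, so it can be confined to the few measured hot spots.

```rust
// Safe, bounds-checked access is the default; the manual override
// is explicit and greppable.
fn sum_checked(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        total += data[i]; // bounds check inserted by the compiler
    }
    total
}

fn sum_unchecked(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        // The `unsafe` block is the "little extra work" that buys
        // back C-like behavior, in one audited spot only.
        total += unsafe { *data.get_unchecked(i) };
    }
    total
}

fn main() {
    let v = vec![1u64, 2, 3, 4];
    assert_eq!(sum_checked(&v), 10);
    assert_eq!(sum_unchecked(&v), 10);
    println!("both sums agree: {}", sum_checked(&v));
}
```

Everywhere outside such a block, the checked path is the lazy default the parent comment asks for.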
> It appears that most of these attacks relied on exploiting the unfortunate design of C, which makes manual memory management the default and safe, managed memory the special case. It should be the reverse. Speed will always matter, but you don't have to use risky, manual memory management everywhere to get speed; you just need it in the few spots where it makes a difference.
That's true, but I would claim something even stronger. Getting safety doesn't mean giving up manual memory management, as Rust shows (disclaimer: I work on Rust). You just need a language or system that enforces that you use safe manually-managed idioms. The idea that safety requires giving up performance (e.g. opting into a garbage collector, or even a runtime) is not true in most cases. In a properly designed system, safety doesn't even require opting into a runtime.
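A minimal illustration of that claim in Rust (my sketch, not the commenter's): the buffer below is managed manually in the sense that it is freed at a statically known point, with no garbage collector or runtime involved, yet the compiler rejects any use of it after ownership moves.

```rust
// Manual memory management, enforced safe at compile time: no GC,
// no runtime. The heap allocation is freed deterministically when
// its owner goes out of scope.
fn main() {
    let buf: Box<[u8]> = vec![0u8; 16].into_boxed_slice(); // heap allocation
    let owner = buf; // ownership moves; `buf` can no longer be used
    // println!("{}", buf[0]); // would not compile: value moved out of `buf`
    assert_eq!(owner.len(), 16);
    println!("buffer of {} bytes, freed at end of scope", owner.len());
} // `owner` dropped here: memory reclaimed immediately, no collector
```

The use-after-move on the commented-out line is rejected at compile time, which is exactly the "enforced safe idiom" being described.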
Your edit almost made me eat my comment, but I will go further: there's nothing risky about manual memory management, as long as the compiler and/or runtime prevent you from accessing memory you didn't allocate, inserting null-pointer and bounds checks (which may need runtime support to get the size of an allocated memory block) wherever the compiler cannot guarantee that you only access memory you allocated.
The reverse situation, garbage-collecting systems that do nothing to prevent you from dereferencing null pointers or going out of bounds, is just as dangerous as C.
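A small sketch of what those inserted checks look like in practice, again using Rust purely as an illustration: when the compiler cannot prove an index is in range, the check happens at run time instead of silently reading out of bounds.

```rust
// Run-time checks where static proof is impossible: `get` returns
// None instead of reading past the end, and plain indexing aborts
// with a panic rather than corrupting memory.
fn main() {
    std::panic::set_hook(Box::new(|_| {})); // keep the demo's output quiet
    let data = [10, 20, 30];
    let idx = data.len(); // one past the end; value only known at run time

    assert_eq!(data.get(1), Some(&20));  // checked access, in bounds
    assert_eq!(data.get(idx), None);     // checked access, out of bounds: no UB
    let caught = std::panic::catch_unwind(|| data[idx]); // indexing panics
    assert!(caught.is_err());
    println!("out-of-bounds access was stopped, not exploited");
}
```

Either way the failure is contained and well-defined, which is the whole difference from C's silent out-of-bounds read.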
What this shows is that if you are using a machine connected to the internet, assume you have been rooted. If you are paranoid, do all of your surfing in a VM over Tor and reset that VM state after every launch.
Good idea, but there are quite a lot of exploits in virtual machines that let the guest infect the host machine too. :P
So it's really pretty hard to stay safe.
You could always run your OS from a read-only CD?
At least you'd be uninfected on each reboot.
SiVal | 12 years ago | reply
pcwalton | 12 years ago | reply
Someone | 12 years ago | reply
gioi | 12 years ago | reply
unknown | 12 years ago | reply
[deleted]
conorh | 12 years ago | reply
http://www.pwn2own.com/2014/03/pwn2own-results-thursday-day-...
gioi | 12 years ago | reply
rwg | 12 years ago | reply
I'll bet it was ocspd they exploited. The CRL handling code in libsecurity is awful, and ocspd runs as root without a sandbox profile.
bfish510 | 12 years ago | reply
sitkack | 12 years ago | reply
Ellipsis753 | 12 years ago | reply