It sounds like what you're describing is one-time allocation, and I think it's a good idea. There is some work on making practical allocators that work this way [1]. For long-running programs, the allocator will eventually run out of virtual address space, and then you need something to resolve that -- either you do some form of garbage collection or you compromise on safety and just start reusing memory. This also doesn't address spatial safety.

[1]: https://www.usenix.org/system/files/sec21summer_wickman.pdf
naasking|1 year ago
Or you destroy the current process after you marshall the data that should survive into a newly forked process. Side benefit: this means you get live upgrade support for free, because what is a live upgrade but migrating state to a new process with updated code?
kstrauser|1 year ago
Yeah, if you allow reuse then it wouldn't be a guarantee. I think it'd be closer to the effects of ASLR, where it's still possible to accidentally break things, just vastly less likely.