item 43085758

HALtheWise | 1 year ago

In addition to the other good answers, if the amount of state that's explicitly managed by software gets too large then it gets really expensive to save or restore that state. This happens, for example, when a syscall transfers control to the operating system. If the (many-MB) cache were software-managed, the OS would need to decide between flushing it all to main memory (expensive for quick syscalls) or leaving it in place and having OS code and data be uncached. Function calls between libraries have similar problems, how is a called function supposed to know which cache space is available for it to use? If you call the same function multiple times who's responsible for keeping its working data cached? For a 32MB L3 cache, flushing the entire cache to memory (as would be required when switching between processes) could take over a millisecond, let alone trying to manage caches shared by multiple cores.
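The "over a millisecond" figure is easy to sanity-check with a back-of-envelope calculation. A sketch, assuming a sustained write bandwidth of 20 GB/s to main memory (an assumed round number, not a measured value; real systems vary widely):

```python
# Rough estimate: time to write back a fully dirty software-managed
# cache to main memory on a process switch.
CACHE_BYTES = 32 * 1024 * 1024   # 32 MB L3, as in the comment above
WRITE_BANDWIDTH = 20e9           # bytes/second -- assumed, not measured

flush_seconds = CACHE_BYTES / WRITE_BANDWIDTH
print(f"Full flush: {flush_seconds * 1e6:.0f} microseconds")
```

At these numbers the flush alone takes roughly 1.7 ms, which is thousands of times the cost of a typical fast syscall, so a software-managed cache would make every process switch dramatically more expensive.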
