top | item 42665598


giomasce | 1 year ago

How does that interact with the cache? Does accessing the ring buffer through the second set of mapped pages end up hitting the same cache line, or is it a fresh request to main memory? If it's the latter, I'd guess that has a good chance of making your circular buffer slower, depending on how big it is, how your cache works, and how much cache pressure you experience. I don't think I know enough about actual caches to say whether that's probable or not.

junon | 1 year ago

Same cache line. CPU caches come after virtual memory translation / TLB lookup. Memory caches work on physical addresses, not linear (virtual) addresses.

Memory access -> TLB cache lookup -> PT lookup (if TLB miss) -> L1 cache check (depending on PT flags) -> L2 cache check (depending on PT flags, if L1 misses) -> ... -> main memory fetch, to boil it down simply.

CPUs would be ridiculously slow if that weren't the case. Also, thinking about it a bit more, I have no idea how it would even work the other way around. (EDIT: To be clear, I meant if the main memory cache were hit first, followed by the MMU; someone correctly mentioned VIVT caches, which aren't what I meant :D)

saagarjha | 1 year ago

VIVT caches exist, though.