maxdamantus | 4 months ago

Just to clarify, I think the parent posts are talking about non-failing page faults, ie where the kernel just needs to update the mapping in the MMU after finding the existing page already in memory (minor page fault), or possibly reading it from filesystem/swap (major page fault).

SIGSEGV isn't raised during a typical page fault, only for accesses deemed to be invalid reads/writes.

When one of the parents talks about "no good programming model/OS api", they basically mean an async option that gives the power of threads; threading allows concurrency of page faults, so the kernel is able to perform concurrent reads against the underlying storage media.

Off the top of my head, a model I can think of for supporting concurrent mmap reads might involve a function:

  bool hint_read(void *data, size_t length);
When the caller is going to read various parts of an mmapped region, it can call `hint_read` multiple times beforehand to add regions into a queue. When the next page fault happens, instead of only reading the currently accessed page from disk, the kernel can drain the `hint_read` queue and read the other pages concurrently. The `bool` return indicates whether the region was queued (false once the queue is full), so the caller can stop making useless `hint_read` calls.

I'm not familiar with userfaultfd, so don't know if it relates to this functionality. The mechanism I came up with is still a bit clunky and probably sub-optimal compared to using io_uring or even `readv`, but these are alternatives to mmap.

vlovich123 | 4 months ago

You’ve actually understood my suggestion, thank you. Unfortunately I think `hint_read` inherently can’t work, because there’s a race between the hinted read and when you actually access the page, and that race is inherent in any attempted solution, so it has to be solved. Signals are also the wrong abstraction mechanism (and are slow and have all sorts of other problems).

You need something more complicated, I think: like rseq and futex, some shared data structure that both userspace and the kernel understand how to mutate atomically. You could literally use rseq to abort if the page isn’t in memory and then submit an io_uring task to get signaled when it gets paged in again, but rseq is a bit too coarse (it’ll trigger on any preemption).

There’s a race-condition starvation danger here (the page gets evicted between when you get the signal and when the sequence completes), but something like this could conceptually be closer to working.

But yes, it’s inherently difficult, which is why it doesn’t exist, though it would be higher performance. And yes, this only makes sense for mmap, not all allocations, so SIGSEGV is irrelevant if you’re looking at today’s kernels.

kragen | 4 months ago

If you want accessing a particular page to cause a SIGSEGV so your custom fault handler gets invoked, you can just munmap it, converting that access from a "non-failing page fault" into one "deemed to be invalid". Then the mechanism I described would "allow[] concurrency of page faults, so the [userspace threading library] is able to perform concurrent reads against the underlying storage media". As long as you were aggressive enough about unmapping pages that none of your still-mapped pages got swapped out by the kernel. (Or you could use mlock(), maybe.)

I tried implementing your "hint_read" years ago in userspace in a search engine I wrote, by having a "readahead thread" read from pages before the main thread got to them. It made it slower, and I didn't know enough about the kernel to figure out why. I think I could probably make it work now, and Linux's mmap implementation has improved enormously since then, so maybe it would just work right away.

maxdamantus | 4 months ago

The point about inducing segmentation faults is interesting, and it sounds like it could work for implementing the `hint_read` mechanism. I guess it would mostly be a question of how performant userfaultfd or SIGSEGV handling is. In any case it will be sub-optimal compared to having it in the kernel's own fault handler, since each userfaultfd read or SIGSEGV callback is already a user-kernel-user switch, and it still needs to perform another system call to do the actual reads, and even more system calls to map the bits of memory again.

Presumably having fine-grained mmaps will be another source of overhead, and each mmap requires another system call: instead of a single fault or a single call to `readv`, you're doing many `mmap` calls.

> I tried implementing your "hint_read" years ago in userspace in a search engine I wrote, by having a "readahead thread" read from pages before the main thread got to them.

Yeah, doing it in another thread will also have quite a bit of overhead. You need some sort of synchronisation with the other thread, and ultimately the "readahead" thread will need to induce the disk reads through something other than a page fault to achieve concurrent reads, since within the readahead thread, the page faults are still synchronous, and they don't know what the future page faults will be.

It might help to do `readv` into dummy buffers to force the kernel to load the pages from disk to memory, so the subsequent page faults are minor instead of major. You're still not reducing the number of page faults though, and the total number of mode switches is increased.

Anyway, all of these workarounds are very complicated and will certainly be a lot more overhead than vectored IO, so I would recommend just doing that. The overall point is that using mmap isn't friendly to concurrent reads from disk like io_uring or `readv` is.

Major page faults are basically the same as synchronous read calls, but Go's read calls are effectively asynchronous at the runtime level, so the OS thread can continue doing computation from other goroutines.

Fundamentally, the benchmarks in this repository are broken because in the mmap case they never read any of the data [0], so there are basically no page faults anyway. With a well-written program, there shouldn't be a reason that mmap would be faster than IO, and vectored IO can obviously be faster in various cases.

[0] Eg, see here where the byte slice is assigned to `_` instead of being used: https://github.com/perbu/mmaps-in-go/blob/7e24f1542f28ef172b...

gpderetta | 4 months ago

Are you reinventing madvise?

maxdamantus | 4 months ago

I think the model I described is more precise than madvise. I think madvise would usually be called on large sequences of pages, which is why it has `MADV_RANDOM`, `MADV_SEQUENTIAL` etc. You're not specifying which memory/pages are about to be accessed, but the likely access pattern.

If you're just using mmap to read a file from start to finish, then the `hint_read` mechanism is indeed pointless, since multiple `hint_read` calls would do the same thing as a single `madvise(..., MADV_SEQUENTIAL)` call.

The point of `hint_read`, and indeed io_uring or `readv` is the program knows exactly what parts of the file it wants to read first, so it would be best if those are read concurrently, and preferably using a single system call or page fault (ie, one switch to kernel space).

I would expect the `hint_read` function to push to a queue in thread-local storage, so it shouldn't need a switch to kernel space. User/kernel space switches are slow, on the order of a few tens of millions per second. This is why the vDSO exists, and why libc buffers writes through `fwrite`/`println`/etc: function calls within userspace can happen at rates of billions per second.