For people who are curious before/after reading it: this is from 2019, and there have been quite a few updates to the io_uring interface in Linux since then. Here's a changelog of liburing, the de facto library for io_uring:
https://github.com/axboe/liburing/compare/liburing-0.4...lib...
That last line — an io_uring-like API being adopted by Windows — is interesting to me. One of the (admittedly few) things Windows had over Linux, when it comes to writing extremely high-performance software in an easy way, was IOCP (I/O completion ports); epoll on Linux was awkward to use and hard to get right.
I've never managed to do a comparison of io_uring vs IOCP, but I'd guess there was some benefit here; I left that job before io_uring was mature.
Would anyone be willing to share a comparison of the two systems?
Nice, it seems a lot of improvements are going into recent kernels.
It would be nice if it were fully adopted by Python asyncio and JS (the PR for Node has been open for four years already; it seems stuck there).
I did this years before io_uring, circa 2006. Working on a Linux-based networking startup called Zeugma Systems, I implemented a kernel-based logging system for a multi-process, multi-node distributed application. I started on Linux 2.6.14. When that startup folded, I think we were on 2.6.27.
The logging system was implemented in a module which was inserted into the kernel and then used the calling thread to run a service. The module provided a device and some ioctls. Processes used the ioctls to attach circular buffers to the device, which was mapped into kernel space using get_user_pages.
Processes would just place messages into their circular buffers and update an index variable in the buffer header. The kernel would automatically pick up the message, without any system call. There was a wakeup ioctl to poke the kernel thread, which was used upon hitting a high water mark (buffer getting near full). This is the basic intuition behind io_uring.
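The producer side described above — a process placing a message into a shared ring and publishing it by bumping an index, with a high-water check deciding when to issue the wakeup ioctl — might be sketched like this. All names here (ring_hdr, RING_SIZE, the return convention) are illustrative assumptions, not the original Zeugma code:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical layout: a power-of-two ring shared with the kernel thread. */
#define RING_SIZE 4096                 /* must be a power of two */
#define HIGH_WATER (RING_SIZE * 3 / 4) /* "buffer getting near full" */

struct ring_hdr {
    _Atomic uint32_t head;             /* consumer (kernel) index */
    _Atomic uint32_t tail;             /* producer (process) index */
    unsigned char data[RING_SIZE];
};

/* Returns 1 if the message fit and the high-water mark was crossed
 * (caller should poke the kernel via the wakeup ioctl), 0 if it fit
 * quietly, -1 if the ring is full. */
static int ring_put(struct ring_hdr *r, const void *msg, uint32_t len)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t used = tail - head;       /* indices are free-running */

    if (used + len > RING_SIZE)
        return -1;                     /* full: wake the kernel and retry */

    for (uint32_t i = 0; i < len; i++) /* copy, handling wrap-around */
        r->data[(tail + i) & (RING_SIZE - 1)] = ((const unsigned char *)msg)[i];

    /* Publish: the release store orders the copy before the index update,
     * standing in for an explicit write barrier. */
    atomic_store_explicit(&r->tail, tail + len, memory_order_release);

    return (used + len >= HIGH_WATER) ? 1 : 0;
}
```

The key property is that the common case is a plain store plus an index update, with no system call at all; the ioctl is only needed near the high-water mark.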
The kernel thread collected messages from multiple buffers and sent them to several destinations (files and sockets).
I no longer have most of this code, but some of it survived, including a kernel mutex and condition variable library featuring a function that lets you give up a mutex to wait on a condition variable, while also polling any mixture of kernel file and socket handles, with a timeout. This function sat at the core of the kernel thread's loop.
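In userspace terms, the primitive described — give up a mutex, then wait for either a condition signal or any of a set of file descriptors, with a timeout — can be approximated by backing the "condition variable" with an eventfd that is polled alongside the other descriptors. This is a sketch of the idea under that assumption, not the original in-kernel code:

```c
#include <poll.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Hypothetical userspace analogue: the "condition variable" is an
 * eventfd, so a signal and fd readiness can be awaited in one poll(). */
struct cond_poll {
    pthread_mutex_t lock;
    int efd;                 /* signalled by cond_poll_signal() */
};

static int cond_poll_init(struct cond_poll *cp)
{
    pthread_mutex_init(&cp->lock, NULL);
    cp->efd = eventfd(0, EFD_NONBLOCK);
    return cp->efd < 0 ? -1 : 0;
}

static void cond_poll_signal(struct cond_poll *cp)
{
    uint64_t one = 1;
    write(cp->efd, &one, sizeof one);   /* wake any waiter */
}

/* Drop the mutex, wait for the "condition" or any of nfds descriptors,
 * then re-take the mutex. Returns poll()'s result. */
static int cond_poll_wait(struct cond_poll *cp, struct pollfd *fds,
                          int nfds, int timeout_ms)
{
    struct pollfd all[nfds + 1];
    uint64_t drain;
    int n;

    all[0].fd = cp->efd;
    all[0].events = POLLIN;
    for (int i = 0; i < nfds; i++)
        all[i + 1] = fds[i];

    pthread_mutex_unlock(&cp->lock);    /* give up the mutex while waiting */
    n = poll(all, nfds + 1, timeout_ms);
    pthread_mutex_lock(&cp->lock);      /* re-acquire before returning */

    if (n > 0 && (all[0].revents & POLLIN))
        read(cp->efd, &drain, sizeof drain);  /* consume the signal */
    return n;
}
```

Unlike a bare condition variable, the eventfd counter persists until read, so a signal sent during the unlock-to-poll window is not lost.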
The nice thing was that when a process crashed, its buffer would not go away immediately. Its own address space would be gone, of course, but the kernel's mapping of the shared buffer would gracefully persist until the thread emptied it; everything put into the buffer before the crash was safe. Empty buffers belonging to processes that had died would then be cleaned away.
I had a utility program that would list the buffers and the PIDs of their processes, and provide stats like outstanding bytes and whether the process was still alive.
(The one inefficiency in logging was that log messages need time stamps. Depending on where you get a time stamp from, that requires a trip to the kernel. I can't remember what I did about that.)
A bit of a difficulty in the whole approach is that I wasn't getting a linear mapping of the user pages in the kernel. So I wrote the grotty C code (kernel side) to pull the messages correctly from a scrambled buffer whose pages are out of order, without making an extra copy.
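The "scrambled buffer" access might be sketched as follows: treat the buffer as an array of independently mapped pages, and hand the parser one contiguous run at a time instead of copying whole messages out. Everything here (scattered_buf, chunk_at, the fixed PAGE_SZ) is an illustrative reconstruction, not the surviving code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SZ 4096     /* illustrative page size */

/* Hypothetical stand-in for the kernel's view of a pinned user ring:
 * an array of individually mapped pages, not one linear mapping. */
struct scattered_buf {
    unsigned char **pages;   /* pages[i] holds logical bytes [i*PAGE_SZ ...) */
    size_t npages;
};

/* Copy-free access: return a pointer to the run of bytes starting at
 * logical offset `off`, and how many contiguous bytes are available
 * before the next page boundary. A parser walks these runs rather
 * than memcpy'ing each message into a linear scratch buffer. */
static const unsigned char *chunk_at(const struct scattered_buf *b,
                                     size_t off, size_t *avail)
{
    size_t page = (off / PAGE_SZ) % b->npages;
    size_t in_page = off % PAGE_SZ;
    *avail = PAGE_SZ - in_page;
    return b->pages[page] + in_page;
}

/* Example consumer: read a 32-bit field that may straddle a boundary. */
static uint32_t read_u32(const struct scattered_buf *b, size_t off)
{
    unsigned char tmp[4];
    for (size_t i = 0; i < 4; i++) {
        size_t avail;
        tmp[i] = *chunk_at(b, off + i, &avail);
    }
    uint32_t v;
    memcpy(&v, tmp, sizeof v);
    return v;
}
```

Only small fixed-size fields that straddle a page boundary need the few-byte staging copy; message payloads can be consumed run by run, in place.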
I find myself wondering what a truly high-performance WebSocket server would look like, and whether it would require loading a custom kernel module. Consider the worst case: one message in, N messages out for N connections (the fan-out ratio). My understanding is that at the physical level, subnets time-slice a shared, serialized medium; the units there are network frames (either Wi-Fi or Ethernet). These frames are organized into IP, then TCP, which finally gives your process a "connection" from which data comes and into which data goes. WebSockets, to me, simply make the TCP socket abstraction accessible to browsers, with some extra setup cost but no runtime cost.
To be honest, ordinary "naive" programming methods are good enough to run a sizable single node WebSocket server. I'd be curious how much performance you can get out of a single server or, perhaps more broadly useful, a single core in a single Linux VPS, using different languages and relatively esoteric techniques like this.
> To find the index of an event, the application must mask the current tail index with the size mask of the ring. This commonly looks something like the below:
    unsigned head;

    head = cqring->head;
    read_barrier();
    if (head != cqring->tail) {
        struct io_uring_cqe *cqe;
        unsigned index;

        index = head & (cqring->mask);
        cqe = &cqring->cqes[index];
        /* process completed cqe here */
        ...
        /* we've now consumed this entry */
        head++;
    }
    cqring->head = head;
    write_barrier();
Am I misunderstanding, or is "current tail index" supposed to be "current head index?"
minraws | 3 years ago:
Actually, io_uring has received around 500 commits in 2022 alone. Here's a list of interesting feature additions this year:
- https://www.phoronix.com/news/Linux-LPC2022-io_uring_spawn
- https://www.phoronix.com/news/Linux-520-XFS-uring-Async-Buff
- https://www.phoronix.com/news/Linux-5.20-IO_uring-ZC-Send
- https://www.phoronix.com/news/Linux-5.19-IO_uring
An io_uring-like API was also recently adopted by Windows. Lots of fun.
hinkley | 3 years ago:
Did you implement a write barrier on this to ensure the compiler or CPU doesn't run these two writes out of order?
espoal | 3 years ago:
If you plan on using it, or just want to learn more about it, check this out:
https://github.com/espoal/awesome-iouring
rektide | 3 years ago:
https://github.com/libuv/libuv/issues/1947#issuecomment-4852...