(no title)
ygoldfeld | 1 year ago
For what it's worth at this time (obviously, acting on the following statement will require some level of trust):
It is very much ready to use with boost.asio. (I know that, because I myself use boost.asio religiously; if it were not compatible, I'd pretty much have to not use Flow-IPC myself.) That said, it could fairly easily gain a number of wrapper classes that would turn our stuff into actual boost.asio I/O objects; then it'd be even more straightforward.
The topic is covered here:
https://flow-ipc.github.io/doc/flow-ipc/versions/main/genera...
There's even the little section entitled, "I'm a boost.asio user. Can't I just give your constructor my io_context, and then you'll place the completion handler directly onto it?"
To summarize, though...
-1- You can have Flow-IPC create background threads as needed and invoke your completion handler (e.g., "message received") from such threads. (A sketch of bridging that back onto your own io_context follows this list.)
-2- You can have it create no background threads at all; instead, it asks you to .async_wait() (most easily via boost.asio, but also manually with poll() or whatever you want) whenever it internally needs to async-await something. Your own completion handler (e.g., handling a just-received message M) then executes synchronously, only at predictable points, in non-blocking fashion.
-3- Direct integration with boost.asio, meaning ipc::transport::Channel (e.g.) would take an io_context/executor/whatever in its ctor, and .async_X(F) would indeed post F onto that io_context/executor/whatever. That is essentially syntactic sugar over #2 and currently a TODO. (I'd best file an Issue; I just remembered.)
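To make #1 concrete, here's a minimal sketch of how a boost.asio user can consume that pattern today. fake_ipc_async_receive() and its std::thread merely simulate a library firing your handler from a background thread it created; they are not Flow-IPC's actual API. The real piece is boost::asio::post(), which hops the work back onto your own io_context:

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>
    #include <thread>

    namespace asio = boost::asio;

    // Hypothetical stand-in for a Flow-IPC-style async receive: fires the
    // handler from a background thread the "library" created.
    template <typename Handler>
    std::thread fake_ipc_async_receive(Handler on_msg) {
      return std::thread([on_msg = std::move(on_msg)]() mutable {
        on_msg(std::string{"hello from IPC"});
      });
    }

    int main() {
      asio::io_context ioc;
      auto work = asio::make_work_guard(ioc);  // keep run() alive while we wait

      std::thread ipc_thread = fake_ipc_async_receive([&](std::string msg) {
        // We're on the (simulated) IPC background thread here; hop back onto
        // our own event loop with a plain post().
        asio::post(ioc, [&work, msg = std::move(msg)]() {
          std::cout << "handled on the io_context thread: " << msg << "\n";
          work.reset();  // allow run() to return after this handler
        });
      });

      ioc.run();
      ipc_thread.join();
    }

The same post() bridge works for any handler a library fires off-thread; it's the standard way to re-serialize onto your own event loop.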
The perf_demo (partially recreated in the blog post) integrates into a single-threaded boost.asio io_context using technique #2 above; a minimal sketch of that shape follows below. In the source-code snippets in the blog, we avoided anything asynchronous, just to keep it focused for the maximum number of readers (hopefully).
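Here is the single-threaded shape of technique #2, similarly hedged: the pipe and the on_readable lambda stand in for the native handle and ready-callback the library would hand you (hypothetical, not Flow-IPC's literal API), while posix::stream_descriptor::async_wait() is the real boost.asio piece:

    #include <boost/asio.hpp>
    #include <iostream>
    #include <string>
    #include <unistd.h>

    namespace asio = boost::asio;

    int main() {
      asio::io_context ioc;  // single thread: the one calling run() below

      int fds[2];
      if (::pipe(fds) != 0) return 1;
      // fds[0] plays the role of the native handle the library asks us to watch.
      asio::posix::stream_descriptor watched(ioc, fds[0]);  // takes ownership

      // What the library would ask us to run once the handle is readable; our
      // own message-handling logic then executes synchronously, right here.
      auto on_readable = [&]() {
        char buf[64];
        auto n = ::read(fds[0], buf, sizeof buf);
        std::cout << "handled on the io_context thread: "
                  << std::string(buf, n > 0 ? n : 0) << "\n";
      };

      // The async-wait the library "asks" us to perform on its behalf.
      watched.async_wait(asio::posix::stream_descriptor::wait_read,
                         [&](boost::system::error_code ec) {
                           if (!ec) on_readable();
                         });

      (void)::write(fds[1], "hi", 2);  // simulate the peer sending a message
      ioc.run();
      ::close(fds[1]);
    }

Everything, including the message handling, runs on the one thread calling ioc.run(); no background threads exist, which is the point of technique #2.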
OnlyMortal | 1 year ago
I’ve been bitten by CephFS using one version and my own code using another.
The fixes were simple, though.
Edit: as for performance, I’d not focus on that too much; it’ll depend on the end user’s circumstances. Myself, I’d measure the interfaces with stack-based timings and dump them to a JSON file at exit (sketched below), then graph under various loads with an A/B comparison.
As an example, on a dedupe system I measured LZO as better for performance than LZ4. That was on HPE rack units with spinning-rust disks.
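For what the stack-based-timings suggestion might look like in practice, here's a minimal sketch; every name in it is invented for illustration. A scope guard accumulates wall-clock time per label, and a dump at process exit writes rudimentary JSON:

    #include <chrono>
    #include <cstdio>
    #include <cstdlib>
    #include <map>
    #include <mutex>
    #include <string>

    // Scope guard: measures wall-clock time from construction to destruction
    // and accumulates it (in microseconds) under a label.
    class ScopeTimer {
    public:
      explicit ScopeTimer(std::string label)
        : label_(std::move(label)), start_(std::chrono::steady_clock::now()) {}

      ~ScopeTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start_).count();
        std::lock_guard<std::mutex> lk(mutex());
        totals()[label_] += us;
      }

      // Write the accumulated totals as rudimentary JSON (call once, at exit).
      static void dump(const char* path) {
        std::FILE* f = std::fopen(path, "w");
        if (!f) return;
        std::lock_guard<std::mutex> lk(mutex());
        std::fputs("{\n", f);
        const char* sep = "";
        for (const auto& [label, us] : totals()) {
          std::fprintf(f, "%s  \"%s\": %lld", sep, label.c_str(), us);
          sep = ",\n";
        }
        std::fputs("\n}\n", f);
        std::fclose(f);
      }

    private:
      static std::map<std::string, long long>& totals() {
        static std::map<std::string, long long> t;
        return t;
      }
      static std::mutex& mutex() {
        static std::mutex m;
        return m;
      }
      std::string label_;
      std::chrono::steady_clock::time_point start_;
    };

    void some_interface() {
      ScopeTimer t("some_interface");  // drop one at the top of each interface
      // ... real work ...
    }

    int main() {
      std::atexit([] { ScopeTimer::dump("timings.json"); });
      some_interface();
    }

The resulting timings.json can then be graphed per load for the A/B comparison described above.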
Edit 2: I’ve forwarded your GitHub to my work account. I’ll offer the research to a colleague (Jira backlog) to look at when “someone” wants our new system to be faster. We have a boost.asio solution I wrote that works, over local Unix domain sockets, on a Hitachi NAS.