Story time: I coded a similar thing when I started working after graduation for a DTV software company. We needed IPC on Linux, and I whipped up a very crude equivalent of protobuf (which I didn't know about at the time), based on an RLE library I stumbled upon, and without any form of discovery.
It was circa 2009; I had only been exposed to plain-text protocols and didn't know about JSON. In hindsight, we might have been better off using standard D-Bus or protobuf, but I was a rookie and it provided the performance we needed (for DTV metadata).
I'm happy to see that these can still thrive, and I just recently figured out that discovery is a net multiplier in these projects. Doing this really proved to me that any problem is solvable if you have some time to think about it and can prototype.
I long for those moments now; I feel like nearly all computing issues have been solved and we are now just plumbers, connecting libraries and software modules through config files instead of building things.
Only if you choose to be a plumber. You can equally write new things if you want.
Also I think "any problem is solvable" needs some qualification - there are a ton of problems that have yet to be solved or are super complicated. I still haven't figured out how unbounded model checking works for example.
As someone having to debug issues on OpenWRT derivatives, I wish I had a time machine to tell the inventors of Unix to never add other forms of IPC than pipes.
It's all a big pile of stateful daemons notifying other daemons with a billion race conditions and zero debugging capabilities, like a parody of how not to create reliable systems. When you have shell scripts parsing JSON messages, you know it's over.
Pipes and named FIFOs are easy and great. I say this after implementing various IPC methods (Unix domain sockets with fd passing, POSIX message queues, 0MQ, XML-RPC, local TCP sockets, just to name a few). Use a simple line-oriented protocol. If you are passing complex data through your IPC, you know it's time for files. Shared memory is another way to do IPC, but then you need a robust method of detecting the liveness of your local processes, and you give up the Unix file paradigm.
Not to be confused with the (dead) ubus project[1] by the suckless folks. (Mostly I just want to write down the link to that one, because I keep losing it.)
It seems to have some similarities to RouterOS' IPC and of course they have a similar role and environment. I'm curious if anyone who has looked into both of them in detail has any thoughts?
There is no real case to be made against the idea of memory safety, but there is a case to be made for C.
You can write C code such that it's unit-tested, fuzzed, statically analyzed, and reviewed, and embedded code often has one or two specific jobs.
I would have loved to see Zig here, as it makes all of the above (testing, analysis, code clarity for review) easier, but it's also not necessarily memory safe.
You could write it in D with @safe (the SafeD subset), which is memory safe, but that's not a very popular language.
You could use Ada, but again, it's not as easy to find devs for as C is.
You could use Go or OCaml, though Go isn't really memory safe and you can easily cause data races in goroutines, and OCaml has a runtime as far as I know, so that's out of the question for low-memory devices.
You could use Rust, assuming it had existed back when they started this, and you would get memory safety, but also an entire kitchen sink of useless garbage (like C++). You'd also have to shell out to unsafe{} in a lot of places, unless you use a crate for it, which will then do unsafe{} for you.
So chances are, whatever you do, C is a pretty sane choice, or maybe C++ if you want RAII to at least make your resource management and lifetimes easy to handle.
I don't think you deserve to be downvoted, since this is an interesting discussion to have. However, I think it would have been helpful for the discussion if you had outlined which language you'd suggest and why.
To steelman your argument, I would say you think that C is unsafe to such a degree that even a for loop is UB much of the time (signed integer overflow is UB), there is no real way to check array bounds, no real way to catch off-by-one and similarly stupid simple errors, there's use-after-free, etc., and the entire ecosystem relies on raw pointers and macros; it's a shitshow. I think your point would have been to suggest Rust, as it fixes all these issues while bringing along a stronger type system and a better toolchain.
A decent question, I've wondered about Rust programs on openwrt but at the moment they're just too big! Linking the stdlib into every binary doesn't help, and the code is generally larger than the C equivalent. It doesn't seem unsolvable though, I'm hopeful. no_std rust binaries can be near competitive with C.
Binary size is sometimes a big deal. Two decades ago I investigated using C++ for an "embedded" linux ssh server, but decided the 30kB overhead was too large (target was a 4MB laptop). The server ended up being used in OpenWRT and other places, I'm curious if it would have happened if I'd gone with C++ instead.
Considering the repo dates back to 2010 (predating Rust by 4+ years), there probably wasn’t a better option that could run on the embedded devices that OpenWRT targets.
From working with OpenWRT for years on embedded systems where sometimes 128 bytes made a difference: this is not a useful question.
In the embedded space, you use small effective languages like assembly and C. You either design for explicit memory use entirely, or you systematically test for memory usage and waste - especially if you're delivering industrial applications. The so-called "memory safe" systems tend to be very expensive for memory usage and space, and not worth the investment. Usually, anyway. This may change in the future but it's a bad change if it comes with additional power usage requirements and the environment also requires minimal power use.
westurner | 2 years ago:
Newline-delimited != SOTA
https://en.wikipedia.org/wiki/Apache_Arrow :
> Arrow allows for zero-copy reads and fast data access and interchange without serialization overhead between these languages and systems
JSON lines formatted messages can be UTF-8 JSON-LD: https://jsonlines.org/
Linux networking is now all built on eBPF.
[1] https://web.archive.org/web/20131209010702/http://unixbus.or...
attah_ | 2 years ago:
iw event | awk '/new station/ {print $4}' | xargs -n 1 sh -c 'ubus send new_station {\"mac\":\"$1\"}' _
(Yes, this contains silly hacks to work around busybox limitations)
fatfingerd | 2 years ago:
https://news.ycombinator.com/item?id=33904105
peter_d_sherman | 2 years ago:
https://git.openwrt.org/?p=project/ubus.git;a=tree;h=refs/he...
Roark66 | 2 years ago:
I don't know if we now have a better alternative that is similar in speed, RAM use, and binary size. If we do, I'd read a discussion on which one to use with interest.