top | item 32495165

garphunkle | 3 years ago

I've been working in embedded for 5 years and am curious how rust could solve my biggest headaches:

* Managing build configurations - I use CMake to build a single application for multiple hardware platforms. This is accomplished almost exclusively through linking, e.g., a single header file "ble-ncp-driver.h" with multiple "ble-ncp-driver.cpp" files for each target platform. I call this the "fat driver" approach which has proven to be easier to work with than creating a UART abstraction or ADC abstraction. Does rust's package system address this?

* Automated device testing - fluid leaks are similar to bugs in software. They are systemic in nature and cannot be easily understood through static analysis. We spent as much time maintaining a test bench as we did on product development.

* Preemptive operating systems - more trouble than they are worth. Often, devs get bogged down writing message queues to pass items between task contexts, and timing analysis requires detailed event tracing.

Given I don't see teams struggle with memory ownership (easy to do if you never, ever malloc), what else can rust bring to embedded dev?

quake | 3 years ago

I've found Cargo more than up to the task of managing build configurations, and it doesn't require monkeying around with CMake scripts or Makefiles. As pointed out in another comment, you can gate features and crates based on the target you're compiling for. Cargo also supports custom build profiles, so you can pick and choose what you want even if it's all on the same target.
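As a rough sketch, target-gated dependencies, feature gates, and a custom profile might look like this in Cargo.toml (the feature names are invented for illustration; cortex-m is a real crate, but the version pin here is just an example):

```toml
# Hypothetical Cargo.toml fragment.

# Dependency pulled in only when building for a bare-metal ARM target.
[target.'cfg(all(target_arch = "arm", target_os = "none"))'.dependencies]
cortex-m = "0.7"

# Mutually exclusive board features, chosen at build time with
# `cargo build --no-default-features --features board-b`.
[features]
default = ["board-a"]
board-a = []
board-b = []

# A custom release profile tuned for flash size.
[profile.release]
opt-level = "s"
lto = true
```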

Creating a heap in Rust on a cortex M is safe and cheap-ish with a crate supported by the rust-lang developers. Much easier than implementing your own free() method on a memory pool.

I think you would like rtic. Not a pre-emptive RTOS, but a way to manage context between ISRs without relying on some kind of module or global variable that can get corrupted by multiple accessors. Very minimal overhead compared to FreeRTOS.

lifeinthevoid | 3 years ago

It's not just ownership, it's memory safety. I've worked in embedded development and I can't say I've never seen a segfault, or worse: without an MMU, just random crashes and buggy behavior due to memory corruption.

markjgx | 3 years ago

> Managing build configurations...

In terms of package management, you can apply rules to which crates you want to include, including platform-specific constraints:

  [target.'cfg(target_os = "linux")'.dependencies]
  nix = "0.5"

On the code side it's pretty much the same as C++: you have a module that defines an interface and per-platform implementations that are included depending on a conditional-compilation attribute such as #[cfg(target_os = "linux")].

https://github.com/tokio-rs/mio/blob/c6b5f13adf67483d927b176...
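A minimal host-runnable sketch of that pattern, mirroring the one-header/many-implementations "fat driver" idea (the module and function names are invented for illustration):

```rust
// Exactly one of these modules is compiled in, selected by target.
#[cfg(target_os = "linux")]
mod ble_ncp_driver {
    pub fn backend() -> &'static str { "linux-uart" }
}

// Fallback implementation for every other target.
#[cfg(not(target_os = "linux"))]
mod ble_ncp_driver {
    pub fn backend() -> &'static str { "generic" }
}

fn main() {
    println!("ble backend: {}", ble_ncp_driver::backend());
}
```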

ecesena | 3 years ago

For configurations, rust supports features. To me they look very similar to ifdefs in C, except they're managed directly by cargo and can be passed down into dependencies and modules.

You can decide how to use them, for example you can very much create "fat drivers".

If you want to see an example, here's how we build for dev vs release, on two different boards. Cargo makes it really smooth. https://github.com/solokeys/solo2/blob/main/runners/lpc55/Ma...

Similarly, for testing, one annoyance for us is that in theory the user should press a button for every action. We have a feature to disable that, just so we can run integration tests (either on PC or on device) more smoothly.

larve | 3 years ago

C/C++ embedded developer here. I've never used Rust in embedded because I didn't really see the need. But I'm currently taking a retreat and I started playing with embedded Rust.

At work I had the same stance as you, and pushed against adding Rust to our ecosystem (to avoid fragmenting what was 100% C++/Python):

- memory ownership bugs are not a problem (and even on the host, with unique_ptr and shared_ptr you can really get quite far)

- C++ metaprogramming is really quite expressive for nipping most bugs in the bud (say, writing to the wrong port, adding an i16 to an i32, or adding ms to us)

- C++ metaprogramming is pretty good at building bigger abstractions, such as monadic tasks

Here's the main advantages I see and which convinced me to take it seriously.

- cargo for package management and building. It's extremely easy and "nice" to add packages, manage multiple configurations, and build additional tools as part of the build but run them on the host (say, a protocol parser generator, etc.)

This is just huge. I basically almost never reused any code except copy pasting source from other projects or from the vendor lib straight into the project, because anything else was just too brittle, even with CMake. Most embedded projects I worked on had their own idiosyncratic build system based on make, and you had to relearn it every time.

- macros that are actually worth it. This might be the most exciting thing. I often use patterns such as state machines and other formalisms, but the best I can do in C++ to make them nice to write is mix some ugly ass macros with some templating, and it always ends up being a mess in the error messages. Rust gives you some really decent "lisp"-y metaprogramming.
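As a small taste of what a declarative macro can do here, this sketch generates a state enum and its transition function from a compact description (the state machine and all names are invented for illustration):

```rust
// Expands `A -> B` pairs into an enum plus a `next` transition fn.
macro_rules! state_machine {
    ($($from:ident -> $to:ident),* $(,)?) => {
        #[derive(Debug, Clone, Copy, PartialEq)]
        enum State { $($from,)* Done }

        fn next(s: State) -> State {
            match s {
                $(State::$from => State::$to,)*
                State::Done => State::Done,
            }
        }
    };
}

state_machine! {
    Idle -> Armed,
    Armed -> Firing,
    Firing -> Done,
}

fn main() {
    // Walk the machine to its terminal state.
    let mut s = State::Idle;
    while s != State::Done {
        s = next(s);
    }
    assert_eq!(s, State::Done);
}
```

A malformed description fails at compile time rather than at run time, which is exactly the kind of error-message quality that's hard to get from C++ macro/template mixes.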

- rust works equally well for the bare metal and the highest level scripting. That means that my projects won't end up being a mix of cmake + bash + python + C++, I can do everything in rust.

- the embedded code with an abstracted HAL looks REALLY nice. It's almost arduino-like, except this is actually the real thing. This is what my pairing partner and I came up with to control a SPI display:

    fn new(
        spim: spim::Spim<SPIM0>,
        timer: &'a mut hal::Timer<pac::TIMER0>,
        cs: gpio::Pin<gpio::Output<gpio::PushPull>>,
        rst: gpio::Pin<gpio::Output<gpio::PushPull>>,
        dc: gpio::Pin<gpio::Output<gpio::PushPull>>,
        busy: gpio::Pin<gpio::Input<gpio::PullUp>>,
    ) -> Display<'a> {
        Display {
            spim,
            timer,
            cs,
            rst,
            dc,
            busy,
        }
    }

    fn init(&mut self) {
        self.reset();

        // BOOSTER SOFT START
        self.spi(&[0x06u8, 0x17, 0x17, 0x17]);

        // POWER ON
        self.spi(&[0x04]);

        // CHECK NOT BUSY
        self.check_not_busy();
    }

Not only is every GPIO configuration typechecked, but the HAL layer takes care of initializing the abstracted HAL peripheral correctly for this chip architecture (nrf52833). This is of course not rocket science, but dang it just felt nice to have it work, and not have to wrestle with some mud-tier vendor HAL monstrosity.

- the community has reached critical mass, and I think it won't be too long until there are actually more Rust developers on the market than C++ developers. Plus you kind of get the full-stack experience.

ostenning | 3 years ago

Regarding preemptive operating systems, a lightweight alternative is rtic.rs, which I have found pretty great for my time-critical applications.

ctrlmeta | 3 years ago

> (easy to do if you never, ever malloc)

Not an embedded systems developer, so an honest question: what do you do instead of malloc? Have a large array on the stack and manage memory within it manually?

larve | 3 years ago

I mostly allocate static areas in the BSS segment. That way, I know at compile time that I allocated my memory correctly, assuming that I have my stack under control.

Then I follow my two rules of embedded development:

- no recursion

- everything has to be O(1)

If I'm honest, I can't remember a project where I had to use even a pool allocator, which you would usually need if you were trying to do, like, reorderable queues / lists / trees. Right now I can't come up with a proper use case. If you do need to, say, compute a variable-length sequence of actions based on an incoming packet, then I would structure my code so that:

a) only the current action and the next action get computed (so that there is no pause in between executing them)

b) compute the next action when I switch over (basically with a ping-pong buffer)

c) verify real-time invariants

My most used structure is the ring buffer to smooth out "semi-realtime" stuff, and if the ring buffer overflows, well, the ring buffer overflows and it has to be dealt with. If I could have more memory I would just make the ring buffer bigger.
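A minimal sketch of such a fixed-capacity ring buffer (no heap, O(1) push/pop; the drop-on-overflow policy and all names are my own choices for illustration, not from any crate):

```rust
// Fixed-capacity ring buffer backed entirely by inline storage.
struct Ring<const N: usize> {
    buf: [u8; N],
    head: usize, // next write index
    tail: usize, // next read index
    len: usize,
}

impl<const N: usize> Ring<N> {
    fn new() -> Self {
        Ring { buf: [0; N], head: 0, tail: 0, len: 0 }
    }

    /// Returns false (and drops the byte) on overflow -- the caller
    /// decides how an overflow has to be dealt with.
    fn push(&mut self, b: u8) -> bool {
        if self.len == N {
            return false;
        }
        self.buf[self.head] = b;
        self.head = (self.head + 1) % N;
        self.len += 1;
        true
    }

    fn pop(&mut self) -> Option<u8> {
        if self.len == 0 {
            return None;
        }
        let b = self.buf[self.tail];
        self.tail = (self.tail + 1) % N;
        self.len -= 1;
        Some(b)
    }
}

fn main() {
    let mut r: Ring<4> = Ring::new();
    for b in [1u8, 2, 3, 4] {
        assert!(r.push(b));
    }
    assert!(!r.push(5)); // full: overflow is visible to the caller
    assert_eq!(r.pop(), Some(1));
}
```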

I'm not sure how clear this explanation is :)

dwheeler | 3 years ago

It's quite common in hard real-time systems, especially in aeronautics, to only allow malloc on startup if it's allowed at all. There are many problems with malloc() and especially free() - they typically don't have any maximum latency guarantees, and even worse, what happens when you can't get memory (e.g., due to leakage or poor packing)?

In many systems this isn't a problem. The number of engines, flaps, etc., don't change at run-time :-). If they change, you're on the ground in maintenance mode and can reboot.

bigfishrunning | 3 years ago

Very small embedded systems tend to have a lot of short-lived items on the stack, and anything that lives longer than a function call exists in static memory at a fixed address. Memory pools are pretty common as well. Small systems tend to avoid a traditional heap, because they can get into trouble pretty easily.
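A toy sketch of such a fixed-block memory pool (block size, capacity, and the index-based API are all invented for illustration; real pools usually hand out pointers and track free blocks in a list rather than scanning):

```rust
// N blocks of 32 bytes each, all storage reserved up front.
struct Pool<const N: usize> {
    blocks: [[u8; 32]; N],
    free: [bool; N],
}

impl<const N: usize> Pool<N> {
    fn new() -> Self {
        Pool { blocks: [[0; 32]; N], free: [true; N] }
    }

    /// Hand out the index of a free block, or None if exhausted --
    /// allocation failure is explicit, never a crash.
    fn alloc(&mut self) -> Option<usize> {
        let i = self.free.iter().position(|&f| f)?;
        self.free[i] = false;
        Some(i)
    }

    fn release(&mut self, i: usize) {
        self.free[i] = true;
    }
}

fn main() {
    let mut p: Pool<2> = Pool::new();
    let a = p.alloc().unwrap();
    let _b = p.alloc().unwrap();
    assert_eq!(p.alloc(), None); // pool exhausted, no UB, no panic
    p.release(a);
    assert_eq!(p.alloc(), Some(a)); // block is reusable
    p.blocks[a][0] = 0xAB; // blocks are plain fixed-size buffers
}
```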

flyingfences | 3 years ago

Function variables, with scope and lifetime limited to the call, get their place on the stack as usual. Everything else -- i.e., constants, static function variables, and anything with higher scope -- is allocated its own memory at compile time. We have no heap. We use no variable-length arrays or other, more dynamic data structures. Anything that needs to grow and shrink does so within its own fixed-length buffer.