EtCepeyd | 1 year ago

This resonates a lot with me.

Distributed systems require insanely hard math at the bottom (Paxos, Raft, gossip, vector clocks, ...). It's not how the human brain works natively -- we can learn abstract thinking, but it's very hard. Embedded systems sometimes require the parallelization of some hot spots, but those are more like the exception AIUI, and you have a lot more control over things; everything is more local and sequential. Even data race free multi-threaded programming in modern C and C++ is incredibly annoying; I dislike dealing with both an explicit mesh of peers, and with a leaky abstraction that lies that threads are "symmetric" (as in SMP) while in reality there's a complicated messaging network underneath. Embedded is simpler, and it seems to demand less that practitioners become advanced mathematicians for day-to-day work.

AlotOfReading|1 year ago

Most embedded systems are distributed systems these days; there's simply a cultural barrier that prevents most practitioners from fully grappling with that fact. A lot of systems I've worked on have benefited from copying ideas invented by distributed systems folks working on networking stuff 20 years ago.

DanielHB|1 year ago

I worked on an IoT platform that consisted of 3 embedded CPUs and one Linux board. The kicker was that the Linux board could only talk directly to one of the chips, but had to be capable of updating the software running on all of them.

Up to 6 of those platforms could be chained in a master-slave configuration (the platform in physical position 1 would assume the "master" role, for a total of 18 embedded chips and 6 Linux boards), on top of optionally having one more box with one more CPU in it for managing some other stuff and integrating with each client's hardware. Each client had a different integration, but at least they mostly integrated with us, not the other way around.

Yeah, it was MUCH more complex than your average cloud. Of course the original designers didn't even bother to define a common network protocol for the messages, so each point of communication not only used a different binary format, it also used a different wire format (CAN bus, Modbus, and Ethernet).

But at least you didn't need to know Kubernetes, just a bunch of custom stuff that wasn't well documented. Oh yeah, and don't forget the bootloaders for each embedded CPU; we had to update those so many times...

The only saving grace is that a lot of the system could rely on literal physical security, because you need physical access (and a crane) to reach most of it. Pretty much only the Linux boards had to meet high security standards, and those were not that complicated to lock down (besides maintaining a custom Yocto distribution, that is).

zootboy|1 year ago

Indeed. I've been building systems that orchestrate batteries and power sources. Turns out, it's a difficult problem to temporally align data points produced by separate components that don't share any sort of common clock source. Just take the latest power supply current reading and subtract the latest battery current reading to get load current? Oops, they don't line up, and now you get bizarre values (like negative load power) when there's a fast load transient.

Even more fun when multiple devices share a single communication bus, so you're basically guaranteed to not get temporally-aligned readings from all of the devices.
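
A minimal sketch of one mitigation, under assumptions that are mine rather than from any specific system (a single host-side arrival timebase, the sample_t layout, and the "load = supply - battery" convention from the comment above): timestamp each reading on arrival and interpolate one stream to the other's timestamp before differencing, instead of subtracting the two "latest" values. It only bounds the skew rather than eliminating it, but it is usually enough to suppress the negative-load artifacts during transients.

    #include <stdint.h>

    /* One reading, stamped on arrival with the host's clock (the devices
     * themselves share no clock, so this only bounds the misalignment). */
    typedef struct {
        uint64_t t_us;   /* host-side arrival timestamp, microseconds */
        float    amps;
    } sample_t;

    /* Linearly interpolate a stream between two bracketing samples. */
    static float interp_at(sample_t a, sample_t b, uint64_t t_us)
    {
        if (b.t_us == a.t_us)
            return b.amps;
        float f = (float)(t_us - a.t_us) / (float)(b.t_us - a.t_us);
        return a.amps + f * (b.amps - a.amps);
    }

    /* Load current at the battery sample's instant: interpolate the supply
     * stream to that instant instead of pairing it with a stale "latest". */
    static float load_current(sample_t supply_prev, sample_t supply_next,
                              sample_t battery)
    {
        return interp_at(supply_prev, supply_next, battery.t_us) - battery.amps;
    }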

anitil|1 year ago

Yes, even 'simple' devices these days will have peripherals (ADC, SPI, etc.) running in parallel, often using DMA, plus multiple semi-independent clocks and possibly nested interrupts. Oh, and the UART for some reason always, always has bugs, so hopefully you're using multiple levels of error checking.
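
For illustration, one cheap layer of that checking, as a sketch (the frame layout and the CRC-8 polynomial here are arbitrary choices, not from any particular project): frame the byte stream with a start byte, a length, and a CRC over length + payload, so corrupted or truncated frames get dropped instead of parsed.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define FRAME_START 0x7E

    /* CRC-8 (polynomial 0x07, init 0x00) over n bytes. */
    static uint8_t crc8(const uint8_t *p, size_t n)
    {
        uint8_t crc = 0x00;
        while (n--) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                                   : (uint8_t)(crc << 1);
        }
        return crc;
    }

    /* Frame: [FRAME_START][len][payload...][crc over len+payload].
     * Returns true only for a complete frame with a matching CRC. */
    static bool frame_ok(const uint8_t *buf, size_t n)
    {
        if (n < 3 || buf[0] != FRAME_START)
            return false;
        uint8_t len = buf[1];
        if (n < (size_t)len + 3)
            return false;
        return crc8(&buf[1], (size_t)len + 1) == buf[len + 2];
    }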

motorest|1 year ago

> Distributed systems require insanely hard math at the bottom (Paxos, Raft, gossip, vector clocks, ...). It's not how the human brain works natively -- we can learn abstract thinking, but it's very hard.

I think this take is misguided. Most systems nowadays, especially those involving any sort of network calls, are already distributed systems. Yet the number of systems that come anywhere close to touching fancy consensus algorithms is very, very limited. If you are in a position to design a system and you hear "Paxos" coming out of your mouth, that's the moment you need to step back and think about what you are doing. Odds are you are creating your own problems, and then blaming the tools.

yodsanklai|1 year ago

I remember when I prepared for system design interviews at a FAANG, I was anxious I would get asked about Paxos (which I learned at school). Now that I'm working there, I've never heard Paxos or any fancy distributed algorithm come up. We rely on various high-level services for deployment, partitioning, monitoring, logging, service discovery, storage...

And Paxos doesn't require much maths. It's pretty tricky to consider all possible interleavings, but in terms of maths, it's really basic discrete maths.
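
As a concrete (and entirely standard) example of the flavor of maths involved: the property that makes majority quorums work is a two-line counting argument.

    % Any two majority quorums Q_1, Q_2 out of n acceptors intersect:
    % |Q_1| + |Q_2| \ge 2\lceil (n+1)/2 \rceil \ge n + 1, while |Q_1 \cup Q_2| \le n, hence
    \[
      |Q_1 \cap Q_2| = |Q_1| + |Q_2| - |Q_1 \cup Q_2| \ge (n + 1) - n = 1.
    \]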

convolvatron|1 year ago

This is completely backwards. The tools may have some internal consistency guarantees, handle some classes of failures, etc. They are leaky abstractions that are partially correct. They were not collectively designed to handle all failures and provide consistent views no matter how they are composed.

From the other direction, Paxos, two generals, serializability, etc. are not hard concepts at all. Implementing custom solutions in this space _is_ hard and prone to error, but the foundations are simple and sound.

You seem to be claiming that you shouldn't need to understand the latter, that the former gives you everything you need. I would say that if you build systems using existing tools without even thinking about the latter, you're just signing up to handle preventable errors manually and to treat this box that you own as black and inscrutable.

Thaxll|1 year ago

It does not require any math, because 99.9% of the time the issue is not in the low-level implementation but in the business logic the dev wrote.

No one goes and reviews the transaction engine of Postgres.

EtCepeyd|1 year ago

I tend to disagree.

- You work on postgres: you have to deal with the transaction engine's internals.

- You work in enterprise application integration (EAI): you have ten legacy systems that inevitably don't all interoperate with any one specific transaction manager product. Thus, you have to build adapters, message routing and propagation, gateways, at-least-once-but-idempotent delivery (sketched below), and similar stuff, yourself. SQL business logic will be part of it, but it will not solve the hard problems, and you still have to dig through multiple log files on multiple servers, hoping that you can rely on unique request IDs end-to-end (and that the timestamps across those multiple servers won't be overly contradictory).

In other words: same challenges at either end of the spectrum.
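
To make "at-least-once-but-idempotent" concrete, a minimal sketch (the ring-buffer dedup window and on_message() are illustrative, not any real EAI product): lean on the end-to-end unique request ID to drop redeliveries before they reach the business logic.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define SEEN_WINDOW 1024

    /* Recently processed request IDs; 0 means "empty slot"
     * (so request IDs are assumed to be nonzero). */
    static uint64_t seen_ids[SEEN_WINDOW];
    static size_t   seen_next;

    static bool already_seen(uint64_t request_id)
    {
        for (size_t i = 0; i < SEEN_WINDOW; i++)
            if (seen_ids[i] == request_id)
                return true;
        return false;
    }

    static void remember(uint64_t request_id)
    {
        seen_ids[seen_next] = request_id;
        seen_next = (seen_next + 1) % SEEN_WINDOW;
    }

    /* Called for every delivery, including retries of the same message. */
    void on_message(uint64_t request_id, const void *payload, size_t len)
    {
        if (already_seen(request_id))
            return;                 /* duplicate delivery: safe to drop */
        remember(request_id);
        /* ... apply the business logic exactly once per request_id ... */
        (void)payload; (void)len;
    }

In a real system the seen-ID record has to be persisted atomically with the message's effect, otherwise a crash between the two reintroduces duplicates.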

toast0|1 year ago

That's true, but you can do a lot of that once, and then get on with your life, if you build the right structures. I've gotten a huge amount of mileage from using consensus to decide where to send reads/writes, then having everyone send their reads/writes for the same piece of data to the same place; that place does the application logic where it's simple, and sends the result back. If you don't get the result back in time, bubble it up to the end-user application, and it may retry or not, depending.

This is built on the assumption that the network is either working or the server team / ops team is paged and actively trying to figure it out. It doesn't work nearly as well in an environment where the network is consistently slightly broken.
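
A minimal sketch of that routing idea (the shard table, the FNV-1a hash, and the node IDs are illustrative assumptions): consensus only has to run when ownership changes; every ordinary read/write just hashes its key and forwards to the shard's current owner.

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_SHARDS 64

    /* Updated by the consensus layer whenever ownership changes;
     * between changes it is just a read-only lookup table. */
    static uint16_t shard_owner[NUM_SHARDS];

    /* FNV-1a, 64-bit. */
    static uint64_t fnv1a(const char *key)
    {
        uint64_t h = 14695981039346656037ULL;
        while (*key) {
            h ^= (uint8_t)*key++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    /* Every node routes reads/writes for a given key to the same owner. */
    uint16_t route(const char *key)
    {
        return shard_owner[fnv1a(key) % NUM_SHARDS];
    }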

PaulDavisThe1st|1 year ago

> Even data race free multi-threaded programming in modern C and C++ is incredibly annoying; I dislike dealing with both an explicit mesh of peers, and with a leaky abstraction that lies that threads are "symmetric" (as in SMP) while in reality there's a complicated messaging network underneath.

If you're using traditional (p)threads-derived APIs to get work done on a message passing system, I'd say you're using the wrong API.

More likely, I don't understand what you might mean here.

EtCepeyd|1 year ago

Sorry, I figure I ended up spewing a bit of gibberish.

- By "explicit mesh of peers", I referred to atomics, and the modern (C11 and later) memory model. The memory model, for example as written up in the C11 and later standards, is impenetrable. While the atomics interfaces do resemble a messaging passing system between threads, and therefore seem to match the underlying hardware closely, they are discomforting because their foundation, the memory model, is in fact laid out in the PhD dissertation of Mark John Batty, "The C11 and C++11 Concurrency Model" -- 400+ pages! <https://www.cl.cam.ac.uk/~pes20/papers/topic.c11.group_abstr...>

- By "leaky abstraction", I mean the stronger posix threads / standard C threads interfaces. They are more intuitive and safer, but are more distant from the hardware, so people sometimes frown at them for being expensive.