
Distributed systems programming has stalled

287 points| shadaj | 1 year ago |shadaj.me

217 comments


bsnnkv|1 year ago

Last month I switched from a role working on a distributed system (FAANG) to a role working on embedded software which runs on cards in data center racks.

I was in my last role for a year, and 90%+ of my time was spent investigating things that went "missing" at one of the many failure points between the many distributed components.

I wrote less than 200 lines of code that year and I experienced the highest level of burnout in my professional career.

The technical aspect that contributed the most to this burnout was both the lack of observability tooling and the lack of organizational desire to invest in it. Whenever I would bring up this gap I would be told that we can't spend time/money and wait for people to create "magic tools".

So far the culture in my new embedded (Rust, fwiw) position is the complete opposite. If you're burnt out working on distributed systems and you care about some of the same things that I do, it's worth giving embedded software dev a shot.

alabastervlog|1 year ago

I've found the rush to distributed computing when it's not strictly necessary kinda baffling. The costs in complexity are extreme. I can't imagine the median company doing this stuff is actually getting either better uptime or performance out of it—sure, it maybe recovers better if something breaks, maybe if you did everything right and regularly test that stuff (approximately nobody does though), but there's also so very much more crap that can break in the first place.

Plus: far worse performance ("but it scales smoothly" OK but your max probable scale, which I'll admit does seem high on paper if you've not done much of this stuff before, can fit on one mid-size server, you've just forgotten how powerful computers are because you've been in cloud-land too long...) and crazy-high costs for related hardware(-equivalents), resources, and services.

All because we're afraid to shell into an actual server and tail a log, I guess? I don't know what else it could be aside from some allergy to doing things the "old way"? I dunno man, seems way simpler and less likely to waste my whole day trying to figure out why, in fact, the logs I need weren't fucking collected in the first place, or got buried in some damn corner of our Cloud I'll never find without writing a 20-line "log query" in some awful language I never use for anything else, in some shitty web dashboard.

Fewer, or cheaper, personnel? I've never seen cloud transitions do anything but the opposite.

It's like the whole industry went collectively insane at the same time.

[EDIT] Oh, and I forgot, for everything you gain in cloud capabilities it seems like you lose two or three things that are feasible when you're running your own servers. Simple shit that's just "add two lines to the nginx config and do an apt-install" becomes three sprints of custom work or whatever, or just doesn't happen because it'd be too expensive. I don't get why someone would give that stuff up unless they really, really had to.

[EDIT EDIT] I get that this rant is more about "the cloud" than distributed systems per se, but trying to build "cloud native" is the way that most orgs accidentally end up dealing with distributed systems in a much bigger way than they have to.

jasonjayr|1 year ago

> Whenever I would bring up this gap I would be told that we can't spend time and wait for people to create "magic tools".

That sounds like an awful organizational ethos. 30hrs to make a "magic tool" to save 300hrs across the organization sounds like a no-brainer to anyone paying attention. It sounds like they didn't even want to invest in out-sourced "magic tools" to help either.

lumost|1 year ago

Anecdotally, I see a major underappreciation in the distributed systems community for just how fast and efficient modern hardware is.

I’ve seen a great many engineers become so used to provisioning compute that they forget that the same “service” can be deployed in multiple places. Or jump to building an orchestration component when a simple single process job would do the trick.

intelVISA|1 year ago

Distributed systems always end up a dumping ground for failed tech solutions to deep org dysfunction.

Weak tech leadership? Let's "fix" that with some microservices.

Now it's FUBAR? Conceal it with some cloud native horrors, sacrifice a revolving door of 'smart' disempowered engineers to keep the theater going til you can jump to the next target.

Funny, because dis sys has been pretty much solved since Lamport, 40+ years ago.

bob1029|1 year ago

> Whenever I would bring up this gap I would be told that we can't spend time/money and wait for people to create "magic tools".

I've never once been granted explicit permission to try a different path without being burdened by a mountain of constraints that ultimately render the effort pointless.

If you want to try a new thing, just build it. No one is going to encourage you to shoot holes through things that they hang their own egos from.

fatnoah|1 year ago

> The technical aspect that contributed the most to this burnout was both the lack of observability tooling and the lack of organizational desire to invest in it.

One of the most significant "triumphs" of my technical career came at a startup where I started as a Principal Engineer and left as the VP Engineering. When I started, we had nightly outages requiring Engineering on-call, and by the time I left, no one could remember a recent issue that required Engineers to wake up.

It was a ton of work and required a strong investment in quality & resilience, but the even bigger impact came from observability. We couldn't afford APM, so we took a very deliberate approach to what we logged and how, and stuffed it into an ELK stack for reporting. The immediate benefit was a drastic reduction in time to diagnose issues, and it effectively let our small operations team triage issues and distinguish app vs. infra problems almost immediately. Additionally, it was much easier to identify and mitigate fragility in our code and infra.
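The "deliberate logging" approach can be as simple as emitting one machine-parseable line per event with stable field names, so a pipeline like ELK can aggregate without custom parsing. A hand-rolled sketch (illustrative only, not what that team actually used):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Emit one key=value line per event. Stable field names ("event", "order_id")
// are what make downstream querying cheap; free-form prose logs are not.
fn log_event(event: &str, fields: &[(&str, &str)]) -> String {
    let ts = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_millis();
    let mut line = format!("ts={} event={}", ts, event);
    for (k, v) in fields {
        line.push_str(&format!(" {}={}", k, v));
    }
    println!("{}", line);
    line
}

fn main() {
    let line = log_event(
        "checkout.failed",
        &[("order_id", "123"), ("reason", "card_declined")],
    );
    assert!(line.contains("event=checkout.failed"));
    assert!(line.contains("reason=card_declined"));
}
```

In practice you'd reach for a logging crate rather than hand-formatting, but the discipline — few, deliberate, consistently-named fields — is the part that pays off.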

The net result was an increase in availability from 98.5% to 99.995%, and I think observability contributed to at least half of that.

fra|1 year ago

As someone who builds observability tools for embedded software, I am flabbergasted that you're finding a more tools-friendly culture in embedded than in distributed systems!

Most hardware companies have zero observability, and haven't yet seen the light ("our code doesn't really have bugs" is a quote I hear multiple times a week!).

anonzzzies|1 year ago

I really love embedded work; at least it gives you the feeling that you have control over things, instead of everything being confusing and black-boxed, where you sometimes have to burn a goat to make it work.

EtCepeyd|1 year ago

This resonates a lot with me.

Distributed systems require insanely hard math at the bottom (Paxos, Raft, gossip, vector clocks, ...). It's not how the human brain natively works -- we can learn abstract thinking, but it's very hard. Embedded systems sometimes require parallelizing some hot spots, but those are more the exception AIUI, and you have a lot more control over things; everything is more local and sequential. Even data-race-free multi-threaded programming in modern C and C++ is incredibly annoying; I dislike dealing both with an explicit mesh of peers and with a leaky abstraction that pretends threads are "symmetric" (as in SMP) while in reality there's a complicated messaging network underneath. Embedded is simpler, and it seems to demand less that practitioners become advanced mathematicians for day-to-day work.
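For a sense of the flavor, a vector clock -- one of the simpler items on that list -- fits in a few dozen lines. A sketch (hypothetical `VectorClock` type, purely illustrative):

```rust
use std::collections::HashMap;

// Each node tracks the highest event count it has seen from every peer.
// Comparing two clocks recovers the happened-before partial order.
#[derive(Clone, Debug, Default, PartialEq)]
struct VectorClock {
    counts: HashMap<String, u64>,
}

impl VectorClock {
    // Record a local event on `node`.
    fn tick(&mut self, node: &str) {
        *self.counts.entry(node.to_string()).or_insert(0) += 1;
    }

    // Merge a clock carried by an incoming message: per-node max.
    fn merge(&mut self, other: &VectorClock) {
        for (node, &n) in &other.counts {
            let e = self.counts.entry(node.clone()).or_insert(0);
            *e = (*e).max(n);
        }
    }

    // self happened-before other iff every component is <= and the
    // clocks are not identical. Incomparable clocks are concurrent.
    fn happened_before(&self, other: &VectorClock) -> bool {
        let le = self
            .counts
            .iter()
            .all(|(k, &v)| v <= *other.counts.get(k).unwrap_or(&0));
        le && self != other
    }
}

fn main() {
    let mut a = VectorClock::default();
    let mut b = VectorClock::default();
    a.tick("a"); // event on node A
    b.merge(&a); // A's clock travels with a message to B
    b.tick("b"); // event on B after receipt
    assert!(a.happened_before(&b));
    assert!(!b.happened_before(&a));
}
```

The mechanics are simple; the hard part the comment points at is reasoning about every interleaving these clocks can witness.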

beoberha|1 year ago

Yep - I’ve very much been living the former for almost a decade now. It is especially difficult when the components stretch across organizations. It doesn’t quite address what the author here is getting at, but it does make me believe that this new programming model will come from academia and not industry.

lelanthran|1 year ago

I spent the majority of my career as an embedded dev. There are ... different ... challenges, and I'm not so sure that I would want to go back to it.

It pays poorly, the tooling more often than not sucks (more than once I've had to write some sort of stub for an out-of-date gcc), and observability is non-existent unless you're looking at a device on your desk, in which case your observability tool is an oscilloscope (or a bus-pirate type of device, if you're lucky enough to have the lower layers completely free of bugs).

The datasheets/application notes are almost always incomplete, with errata (in a different document) telling you "Yeah, that application note is wrong, don't do that".

The required math background can be strict as well: RF, analog ... basically anything interesting you want to do requires a solid grounding in undergrad maths.

I went independent about 2 years ago. You know what pays better and has less work? Line of business applications. I've delivered maybe two handfuls of LoB applications but only one embedded system, and my experience with doing that as a contractor is that I won't take an embedded contract anymore unless it's a client I've already done work for, or if the client is willing to pay 75% upfront, and they agree to a special hourly rate that takes into account my need for maintaining all my own equipment.

alfiedotwtf|1 year ago

I have talked to many people in the Embedded space doing Rust, and every single one of them had the biggest grin while talking about work. Sounds like you’ll have fun :)

ithkuil|1 year ago

10 years ago I went on a similar journey. I left FAANG to join a startup building embedded firmware for the ESP8266. The lack of tooling was very frustrating. I ended up writing a gdb stub (before Espressif released one) and a malloc debugger (via serial port) just to manage to get shit done.

bryanlarsen|1 year ago

I think you were unlucky in your distributed system job and lucky in your embedded job. Embedded is filled with crappy 3rd party and in-house tooling, far more so than distributed, in my experience. That crappiness perhaps leads to a higher likelihood to spend time on them, but it doesn't have to.

Embedded does give you a greater feeling of control. When things aren't working, it's much more likely to be your own fault.

sly010|1 year ago

I don't disagree, but funny that I recently made a point to someone that modern consumer embedded systems (with multiple MCUs connected with buses and sometimes shared memory) are basically small distributed systems, because partial restarts are common and the start/restart order of the MCUs is not very well defined. At least in the space I am working in. (Needless to say we use C, not rust)

literallyroy|1 year ago

How did you make that transition/find a position? Were you already using Rust in a previous role?

yolovoe|1 year ago

Is the “card” work EC2 Nitro by any chance? Sounds similar to what I used to do

junon|1 year ago

Can concur, I also switched mostly to firmware and have enjoyed it much more. Though Rust firmware jobs are hard to come by.

fons|1 year ago

Would you mind disclosing your current employer? I am also interested in moving to an embedded systems role.

Scramblejams|1 year ago

I've often heard embedded is a nightmare of slapdashery. Any tips for finding shops that do it right?

bagels|1 year ago

Which company? Doesn't sound like the infra org I was in at a FAANG

englishspot|1 year ago

curious as to how you made that transition. seems like that'd be tough in today's job market.

DaiPlusPlus|1 year ago

> I switched from a role working on a distributed system [...] to embedded software which runs on cards in data center racks

Would you agree that, technically (or philosophically?), both roles involved distributed systems (e.g. the world-wide web of web-servers and web-browsers exists as a single distributed system) - unless your embedded boxes weren't doing any network IO at all?

...which makes me genuinely curious exactly what your aforementioned distributed-system role was about and what aspects of distributed-computing theory were involved.

im_down_w_otp|1 year ago

We built a bunch of tools & technology for leveraging observability (docs.auxon.io) to do V&V, stress testing, auto root-cause analysis, etc. in clusters of embedded development (all of it built in Rust too :waves: ), since the same challenges exist for folks building vehicle platforms, lunar rovers, drones, etc. Both within a single system as well as across fleets of systems. Many embedded developers are actually distributed systems developers... they just don't think of it that way.

It's often quite a challenge to get that class of engineer to adopt things that give them visibility and data to track things down. Sometimes it's a capability/experience gap, and sometimes it's over-indexing on the perceived time to reach a solution vs. the time wasted on repeated problems and yak shaving.

gklitt|1 year ago

This is outside my area of expertise, but the post sounds like it’s asking for “choreographic programming”, where you can write an algorithm in a single function while reasoning explicitly about how it gets distributed:

https://en.m.wikipedia.org/wiki/Choreographic_programming

I’m curious to what extent the work in that area meets the need.

shadaj|1 year ago

You caught me! That's what my next post is about :)

roadbuster|1 year ago

How does "choreographic programming" differ from the actor model?

rectang|1 year ago

Ten years ago, I had lunch with Patricia Shanahan, who worked for Sun on multi-core CPUs several decades ago (before taking a post-career turn volunteering at the ASF, which is where I met her). There was a striking similarity between the problems that Sun had been concerned with back then and the problems of the distributed systems that power so much of the world today.

Some time has passed since then — and yet, most people still develop software using sequential programming models, thinking about concurrency occasionally.

It is a durable paradigm. There has been no revolution of the sort that the author of this post yearns for. If "Distributed Systems Programming Has Stalled", it stalled a long time ago, and perhaps for good reasons.

EtCepeyd|1 year ago

> and perhaps for good reasons

For the very good reason that the underlying math is insanely complicated and tiresome for mere practitioners (which, although I have a background in math, I openly aim to be).

For example, even if you assume sequential consistency (which is an expensive assumption) in a multi-threaded C or C++ program, reasoning about the program isn't easy. And once you consider barriers, atomics, and load-acquire/store-release explicitly, the "SMP" (shared memory) proposition falls apart, and you can't avoid programming for a message-passing system with independent actors -- be those separate networked servers or separate CPUs on a board. I claim that struggling with async messaging between independent peers as a baseline is not why most people get interested in programming.
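The message-passing reality hiding under "shared memory" shows up even in a two-thread handoff. A minimal Rust sketch (names are illustrative): a release store publishes a payload, and only the paired acquire load makes it safe to read.

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(AtomicU64::new(0));
    let ready = Arc::new(AtomicBool::new(false));

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let producer = thread::spawn(move || {
        d.store(42, Ordering::Relaxed); // write the payload...
        // ...then publish it: Release guarantees the payload write is
        // visible to anyone who observes the flag with Acquire.
        r.store(true, Ordering::Release);
    });

    let (d, r) = (Arc::clone(&data), Arc::clone(&ready));
    let consumer = thread::spawn(move || {
        // Acquire pairs with the producer's Release: once we see the
        // flag, we are guaranteed to see the payload too.
        while !r.load(Ordering::Acquire) {}
        d.load(Ordering::Relaxed)
    });

    producer.join().unwrap();
    assert_eq!(consumer.join().unwrap(), 42);
}
```

Drop the Release/Acquire pair down to Relaxed and the comment's point appears: the hardware is a messaging network, and the orderings are the protocol, not decoration.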

Our systems (= normal motherboards on one end, and networked peer-to-peer systems on the other) have become so concurrent that doing nearly anything efficiently nowadays requires us to think about messaging between peers, and that's very, very foreign to our traditional, sequential, imperative programming languages. (It's also foreign to how most of us think.)

Thus, I certainly don't want a simple (but leaky) software / programming abstraction that hides the underlying hardware complexity; instead, I want the hardware to be simple (as little internally distributed as possible), so that the simplicity of the (sequential, imperative) programming language then reflects and matches the hardware well. I think this can only be found in embedded nowadays (if at all), which is why I think many have been drawn to embedded recently.

hinkley|1 year ago

I think the underlying premise of Cloud is:

Pay a 100% premium on compute resources in order to pretend the 8 Fallacies of Distributed Computing don’t exist.

I sat out the beginning of Cloud and was shocked at how completely absent they are from conversations within the space. When the hangover hits it’ll be ugly. The Devil always gets his due.

jimbokun|1 year ago

The author critiques having sequential code executing on individual nodes, uninformed by the larger distributed algorithm in which they play a part.

However, I think there are great advantages to that style. It’s easier to analyze and test the sequential code for correctness. Then it writes a Kafka message or makes an HTTP call and doesn’t need to be concerned with whatever is handling the next step in the process.

Then assembling the sequential components once they are all working individually is a much simpler task.

bigmutant|1 year ago

The fundamental problems are communication lag and lack of information about why issues occur (encapsulated by the Byzantine Generals problem). I like to imagine trying to build a fault-tolerant, reliable system for the Solar System. Would the techniques we use today (retries, timeouts, etc) really be adequate given that lag is upwards of hours instead of milliseconds? But that's the crux of these systems, coordination (mostly) works because systems are close together (same board, at most same DC)

shadaj|1 year ago

Stay tuned for the next blog post for one potential answer :) My PhD has been focused on this gap!

hinkley|1 year ago

I don’t think there’s anyone in the Elixir community who wouldn’t love it if companies would figure out that everyone is writing software that contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang, and start hiring Elixir or Gleam devs.

The future is here, but it is not evenly distributed.

ikety|1 year ago

It's so odd seeing people dissuade others from adopting "niche" languages like Elixir or Gleam. If you post a job opportunity in these languages, I guarantee you will be swamped with qualified candidates who are passionate and excited to work with them full time.

rramadass|1 year ago

I decided long ago (after having implemented various protocols and shared-memory multi-threaded code) that what i like best is to use Erlang as "the fabric" for the graph of distributed computing and C/C++ for heavy lifting at any node.

jimbokun|1 year ago

Yes, my sense reading the article was the user is reinventing Erlang.

ignoramous|1 year ago

> writing software that contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang

Since you were at AWS (?), you'd know that Erlang did get its shot at distributed systems there. I'm unsure what went wrong, but if not c/c++, it was all JVM based languages soon after that.

bigmutant|1 year ago

Good resources for understanding Distributed Systems:

- MIT course with Robert Morris (of Morris Worm fame): https://www.youtube.com/watch?v=cQP8WApzIQQ&list=PLrw6a1wE39...

- Martin Kleppmann (author of DDIA): https://www.youtube.com/watch?v=UEAMfLPZZhE&list=PLeKd45zvjc...

If you can work through the above (and DDIA), you'll have a solid understanding of the issues in Distributed System, like Consensus, Causality, Split Brain, etc. You'll also gain a critical eye of Cloud Services and be able to articulate their drawbacks (ex: did you know that replication to DynamoDB Secondary Indexes is eventually consistent? What effects can that have on your applications?)

ignoramous|1 year ago

> Robert Morris (of Morris Worm fame)

(of Y Combinator fame, too)

dingnuts|1 year ago

>The static-location model seems like the right place to start, since it is at least capable of expressing all the types of distributed systems we might want to implement, even if the programming model offers us little help in reasoning about the distribution. We were missing two things that the arbitrary-location model offered:

> Writing logic that spans several machines right next to each other, in a single function

> Surfacing semantic information on distributed behavior such as message reordering, retries, and serialization formats across network boundaries

Aren't these features offered by Erlang?

shadaj|1 year ago

Erlang (is great but) is still much closer to the static-location (Actors) paradigm than what I’m aspiring for. For example, if you have stateful calculations, they are typically implemented as isolated (static-location) loops that aren’t textually co-located with the message senders.

chuckledog|1 year ago

Great point. Erlang is still going strong, in fact WhatsApp is implemented in Erlang

prophesi|1 year ago

Yep, the words "fault tolerance" and "distributed computing" immediately bring Erlang/Elixir to my mind.

tracnar|1 year ago

The unison programming language does foray into a truly distributed programming language: https://www.unison-lang.org/

aDyslecticCrow|1 year ago

Functional programming languages already have a lot of powerful concepts for distributed programming. Many of the distributed programming techniques used elsewhere were taken from an obscure FP language years prior. Erlang comes to mind as still quite uniquely distributed, with no non-FP comparison.

Unison seems to build on it further. Very cool

KaiserPro|1 year ago

Distributed systems are hard, as we all know.

However the number of people that actually need a distributed system is pretty small. With the rise of kubernetes, the number of people who've not been burnt by going distributed when they didn't need to has rapidly dropped.

You go distributed either because you are desperate, or because you think it would be fun. K8s takes the fun out of most things.

Moreover, with machines suddenly getting vast IO improvements, the need for going distributed is much less than it was 10 years ago. (Yes, I know there is fault tolerance, but that adds another dimension of pain.)

sd9|1 year ago

> the number of people who've not been burnt by going distributed when they didn't need to has rapidly dropped

Gosh, this was hard to parse! I’m still not sure I’ve got it. Do you mean “kubernetes has caused more people to suffer due to going distributed unnecessarily”, or something else?

bormaj|1 year ago

Any specific pitfalls to avoid with K8s? I've used it to some degree of success in a production environment, but I keep deployments relatively simple.

margorczynski|1 year ago

Distributed systems are cool, but most people don't really get how much complexity they introduce, which leads to fad-driven decisions like using Event Sourcing where there is no fundamental need for it. I've seen projects get burned by the complexity and overhead it introduces where "simpler" approaches worked well and were easy to extend/fix. Hard-to-find-and-fix bugs, much slower feature addition, and lots more goodies the blogs with toy examples don't mention.

nine_k|1 year ago

The best recipe I know is to start from a modular monolith [1] and split it when and if you need to scale way past a few dozen nodes.

Event sourcing is a logical structure; you can implement it with SQLite or even flat files, locally, if your problem domain is served well by it. Adding Kafka as the first step is most likely costly overkill.

[1]: https://awesome-architecture.com/modular-monolith/

rjbwork|1 year ago

If the choice has already been made to do a distributed system (outside of the engineer's control...), is a choice to use Event Sourcing by the engineer then a good idea?

mrkeen|1 year ago

In my experience:

1) We are surrounded by distributed systems all the time. When we buy and sell B2B software, we don't know what's stored in our partners' databases, and they don't know what's in ours. Who should ask whom, and when? If the data sources disagree, whose is correct? Just being given access to a REST API and a couple of webhooks is all it takes to be in full distributed-systems land.

2) I honestly do not know of a better approach than event-sourcing (i.e. replicated state machine) to coordinate among multiple masters like this. The only technique I can think of that comes close is Paxos - which does not depend on events. But then the first thing I would do if I only had Paxos, would be to use it to bootstrap some kind of event system on top of it.

Even the non-event-sourcing technologies like DBs use events (journals, write-ahead-logs, sstables, etc.) in their own implementation. (However that does not imply that you're getting events 'for free' by using these systems.)

My co-workers do not put any alternatives forward. Reading a database, deciding what action to do, and then carrying out said action is basically the working definition of a race-condition. Bankers and accountants had this figured out thousands of years ago: a bank can't send a wagon across the country with queries like "How much money is in Joe's account?" wait a week for the reply, and then send a second wagon saying "Update Joe's account so it has $36.43 in it now". It's laughable. But now that we have 50-150ms latencies, we feel comfortable doing GETs and POSTs (with a million times more traffic) and somehow think we're not going to get our numbers wrong.

Like, what's an alternative? I have a shiny billion-dollar fully-ACID SQL db with my customer accounts in them. And my SAAS partner bank also has that technology. Put forward literally any idea other than events that will let us coordinate their accounts such that they're not able to double-spend money, or are prevented from spending money if a node is down. I want an alternative to event sourcing.
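The replicated-log pattern defended above can be sketched minimally: state is never read-modify-written in place, it is a fold over an append-only event log, and commands validate at append time. (Hypothetical `Event`/`withdraw` names; a real multi-master system would append through consensus or a broker, not a local `Vec`.)

```rust
// Toy event-sourced ledger: balances are derived, never stored.
#[derive(Debug)]
enum Event {
    Deposited { account: &'static str, cents: i64 },
    Withdrew { account: &'static str, cents: i64 },
}

// Replaying the same log in the same order always yields the same
// balance -- this is what lets replicas agree.
fn balance(log: &[Event], account: &str) -> i64 {
    log.iter().fold(0, |bal, e| match e {
        Event::Deposited { account: a, cents } if *a == account => bal + cents,
        Event::Withdrew { account: a, cents } if *a == account => bal - cents,
        _ => bal,
    })
}

// Validation happens against the log at append time, closing the
// read-decide-act race the comment describes.
fn withdraw(
    log: &mut Vec<Event>,
    account: &'static str,
    cents: i64,
) -> Result<(), &'static str> {
    if balance(log, account) < cents {
        return Err("insufficient funds");
    }
    log.push(Event::Withdrew { account, cents });
    Ok(())
}

fn main() {
    let mut log = vec![Event::Deposited { account: "joe", cents: 5000 }];
    assert!(withdraw(&mut log, "joe", 2000).is_ok());
    assert!(withdraw(&mut log, "joe", 4000).is_err()); // double-spend rejected
    assert_eq!(balance(&log, "joe"), 3000);
}
```

The single-process version is trivially correct; the distributed-systems work is entirely in agreeing on the order of appends, which is exactly where Paxos-style machinery comes back in.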

sanity|1 year ago

The article makes great points about why distributed programming has stalled, but I think there's still room for innovation—especially in how we handle state consistency in decentralized systems.

In Freenet[1], we’ve been exploring a novel approach to consistency that avoids the usual trade-offs between strong consistency and availability. Instead of treating state as a single evolving object, we model updates as summarizable deltas—each a commutative monoid—allowing peers to merge state independently in any order while achieving eventual consistency.

This eliminates the need for heavyweight consensus protocols while still ensuring nodes converge on a consistent view of the data. More details here: https://freenet.org/news/summary-delta-sync/
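The "summarizable deltas" idea can be illustrated with the simplest order-independent merge: per-key max. (This is my own illustration of commutative merging, not Freenet's actual data model.) Because the merge is commutative, associative, and idempotent, peers can apply deltas in any order and still converge.

```rust
use std::collections::BTreeMap;

// Toy convergent state: key -> highest version seen.
#[derive(Clone, Debug, Default, PartialEq)]
struct State(BTreeMap<String, u64>);

impl State {
    // Merge a delta in: per-key max. max is commutative, associative,
    // and idempotent, so application order (and duplicates) don't matter.
    fn apply(&mut self, delta: &State) {
        for (k, &v) in &delta.0 {
            let e = self.0.entry(k.clone()).or_insert(0);
            *e = (*e).max(v);
        }
    }
}

fn main() {
    let d1 = State(BTreeMap::from([("x".to_string(), 1)]));
    let d2 = State(BTreeMap::from([("x".to_string(), 3), ("y".to_string(), 2)]));

    // Peer A sees d1 then d2; peer B sees d2 then d1.
    let (mut a, mut b) = (State::default(), State::default());
    a.apply(&d1);
    a.apply(&d2);
    b.apply(&d2);
    b.apply(&d1);
    assert_eq!(a, b); // both converge regardless of delivery order
}
```

Real systems carry richer payloads than version counters, but the shape is the same: if merge forms a semilattice, convergence needs no consensus round-trip.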

Would love to hear thoughts from others working on similar problems!

[1] https://freenet.org/

Karrot_Kream|1 year ago

Haven't read the post yet (I should, I have been vaguely following y'all along, but obviously not closely enough!). How is this different from delta-based CRDTs? I've built (admittedly toy) CRDTs as DAGs that ship deltas using lattice operations, and it's really not that hard to make it work. There are already CRDT-based distributed stores out there. How is this any different?

herval|1 year ago

Throwing in my two cents on the LLM impact - I've been seeing an increasing number of systems where a core part of the functionality is either LLMs or LLM-generated code (sometimes on the fly, sometimes cached for reuse). If you think distributed systems were difficult before, try to imagine a system where the code being executed _isn't even debuggable or repeatable_.

It feels like we're racing towards a level of complexity in software that's just impossible for humans to grasp.

klysm|1 year ago

That's okay though! We can just make LLMs grasp it!

synergy20|1 year ago

I saw comments about embedded development, which I have been doing for a long time, and just want to make a point here: the pay has an upper limit. You will be paid fine, but you will hit that limit very fast, and it will stay there for the rest of your career. They can swap someone in at that price tag to do whatever you are working on because, after all, embedded development is not rocket science.

cmrdporcupine|1 year ago

The problem with embedded is its proximity to EE which is frankly underpaid.

But it's also more that the "other" kind of SWE work -- "backend" etc is frankly overpaid because of the copious quantities of $$ dumped into it by VC and ad money.

jderick|1 year ago

Distributed systems are hard. I like the idea of "semantic locality." I think it can be achieved to some degree via abstraction. The code that runs across many machines does a lot of stuff, but only a small fraction of that is actually involved in coordination. If you can abstract away those details, you should end up with a much simpler protocol that can be modeled in a succinct way. Then you can verify your protocol much more easily. Formal methods have used tools such as Spin (Promela) or guarded commands (Murphi) for modeling these kinds of systems. I'm sure you could do something similar with the Lean theorem prover. The tricky part is mapping back and forth between your abstract system and the real one. Perhaps LLMs could help here.

I work on hardware and concurrency is a constant problem even at that low level. We use model checking tools which can help.

cmrdporcupine|1 year ago

Two things:

Distributed systems are difficult to reason about.

Computer hardware today is very powerful.

There is a yo-yo process in our industry over the last 50 years between centralization and distribution. We necessarily distribute when we hit the limits of what centralization can accomplish because in general centralization is easier to reason about.

When we hit those junctures, there's a flush of effort into distributed systems. The last major example of this I can think of was the 2000-2010 period, when MapReduce, "NoSQL" databases, Google's massive arrays of supposedly identical commodity grey boxes (not the case anymore), the High Scalability blog, etc. were the flavour of the time.

But then, frankly, mass adoption of SSDs, much more powerful computers, etc. made a lot of those things less necessary. The stuff that most people are doing doesn't require a high level of distributed systems sophistication.

Distributed systems are an interesting intellectual puzzle. But they should be a means to an end not an end in themselves.

tonyarkles|1 year ago

> But then, frankly, mass adoption of SSDs, much more powerful computers, etc. made a lot of those things less necessary. The stuff that most people are doing doesn't require a high level of distributed systems sophistication.

I did my MSc in Distributed Systems and it was always funny (to me) to ask a super simple question when someone was presenting distributed system performance metrics that they'd captured to compare how a system scaled across multiple systems: how long does it take your laptop to process the same dataset? No one ever seemed to have that data.

And then the (in)famous COST paper came out and validated the question I'd been asking for years: https://www.usenix.org/system/files/conference/hotos15/hotos...

spratzt|1 year ago

I would go even further and argue that the vast majority of businesses will never need to think about distributed systems. Modern hardware makes them irrelevant to all but the most niche of applications.

porridgeraisin|1 year ago

I think the reason that distributed systems still are the go-to choice for many software teams is to do with people/career expectations/careers orienting themselves around distributed systems over the time period you mentioned. It will take a while for it to re-orient, and then distributed systems might become a fad again ;) An example of this is typical promotion incentives being easier to get in microservice teams, thereby incentivising people to organize the team/architecture in that way.

Karrot_Kream|1 year ago

When I was graduating from my Masters (failed PhD :) this overview of various programming models is generally how I thought of things.

I've been writing distributed code now in industry for a long time and in practice, having worked at a some pretty high-scale tech companies over the years, most shops tend to favor static-location style models. As the post states, it's due largely to control and performance. Scaling external-distribution systems has been difficult everywhere I've seen it tried and usually ends up creating a few knowledgeable owners of a system with high bus-factor. Scaling tends to work fine until it doesn't and these discontinuous, sharp edges are very very painful as they're hard to predict and allocate resourcing for.

Are external-distribution systems dead ends then? Even if they can achieve high theoretical performance, operation of these systems tends to be very difficult. Another problem I find with external-distribution systems is that there's a lot of hidden complexity in just connecting, reading, and writing to them. So you want to talk to a distributed relational DB, okay, but are you using a threaded concurrency model or an async concurrency model? You probably want a connection pool so that TCP HOL blocking doesn't tank your throughput. But if you're using threads, how do you map your threads to the connections in the pool? The pool itself represents a bottleneck as well. How do you monitor the status of this pool? Tools like Istio strive to standardize this a little bit but fundamentally we're working with 3 domains here just to write to the external-distribution system itself: the runtime/language's concurrency model, the underlying RPC stack, and the ingress point for the external-distribution system.
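The pool-as-bottleneck point above can be made concrete. Below is a minimal, hypothetical sketch (stdlib only; `ConnectionPool` and its methods are illustrative names, not any real driver's API) of a fixed-size pool where worker threads check connections in and out, with a crude stats hook for the monitoring question raised above:

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: N worker threads share M connections (M < N)."""

    def __init__(self, connect, size):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())  # eagerly open all connections

    def acquire(self, timeout=None):
        # Blocks when every connection is checked out: the pool itself
        # becomes a bottleneck, which is exactly the concern above.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

    def stats(self):
        # Crude observability hook: how many connections are idle right now.
        return {"idle": self._pool.qsize()}

# object() stands in for a real DB connection.
pool = ConnectionPool(connect=lambda: object(), size=2)
c = pool.acquire()
assert pool.stats()["idle"] == 1
pool.release(c)
assert pool.stats()["idle"] == 2
```

Even this toy version surfaces the design questions from the comment: the thread-to-connection mapping is implicit in who calls `acquire`, and sizing/monitoring the pool is left entirely to the application.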

Does anyone have strong stories of scaling an external-distribution system that worked well? I'd be very curious. I agree that progress here has stalled significantly. But I find myself designing big distributed architecture after big distributed architecture continuing to use my deep experience of architecting these systems to build static-location systems because if I'm already dealing with scaling pains and cross-domain concerns, I may as well rip off the band-aid and be explicit about crossing execution domains.

kodablah|1 year ago

> Just like the external-distribution model, arbitrary-location architectures often come with a performance cost. Durable execution systems typically snapshot their state to a persistent store between every step.

This is not true by most definitions of "snapshot". Most (all?) durable execution systems use event sourcing, so the persisted state is effectively an immutable event log. And only the events with external side effects are recorded — enough to rebuild the state, not the full state itself. While technically this is not free, it's far cheaper than capturing and storing a "snapshot" in the traditional sense.

> But this simplicity comes at a significant cost: control. By letting the runtime decide how the code is distributed [...] we don’t want to give up: Explicit control over placement of logic on machines, with the ability to perform local, atomic computations

Not all durable execution systems require you to give this up completely. Temporal (disclaimer: my employer) allows grouping of logical work by task queue, which many users use to pick locations of work — even going so far as a task queue per physical resource, which is very common for those wanting that explicit control. Also, there are primitives for executing short, local operations within workflows, assuming that's what is meant there.

nchammas|1 year ago

There is an old project out of Berkeley called BOOM [1] that developed a language for distributed programming called Bloom [2].

I don't know enough about it to map it to the author's distributed programming paradigms, but the Bloom features page [3] is interesting:

> disorderly programming: Traditional languages like Java and C are based on the von Neumann model, where a program counter steps through individual instructions in order. Distributed systems don’t work like that. Much of the pain in traditional distributed programming comes from this mismatch: programmers are expected to bridge from an ordered programming model into a disordered reality that executes their code. Bloom was designed to match–and exploit–the disorderly reality of distributed systems. Bloom programmers write programs made up of unordered collections of statements, and are given constructs to impose order when needed.

[1]: https://boom.cs.berkeley.edu

[2]: http://bloom-lang.net/index.html

[3]: http://bloom-lang.net/features/

jmhucb|1 year ago

Good pattern matching. Bloom is a predecessor project to the OP's PhD thesis work :-) This area takes time and many good ideas to mature, but as the post hints, progress is being made.

th0ma5|1 year ago

Since the advent of multicore processing, a ton of the software you use or create is distributed, so you have to ask whether you want to be in control of how it is distributed or not. If you want it easy and let the library figure it out, then you have to accept the topological ideas it has. For instance, H2O is a great machine learning package that even has its own transparent multi-core processing. If you want to go across machines, it has its own cluster built in. You can also install it into Hadoop, Spark, etc., but once you start going in that direction you're more and more on the hook for what that means, whether it is even more effective for your problem, and what your distributed strategy should be.

Things like re-entrant idempotence, software transactional memory, copy-on-write, CRDTs, etc. are going to have waste and overhead, but in my opinion they can vastly simplify, conceptually, the ongoing development and maintenance of even non-distributed efforts — and we keep having the room to eat the overhead.
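As a tiny illustration of the CRDT trade-off mentioned above, here is a minimal grow-only counter (G-Counter) sketch: each replica keeps a per-replica count, and merging takes the element-wise max, so replicas converge no matter the order or number of merges — at the cost of carrying one entry per replica (the "waste and overhead").

```python
class GCounter:
    """Grow-only counter CRDT: increments are per-replica, merge is max."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count observed from that replica

    def increment(self):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + 1

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so merges can happen in any order, any number of times.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(); a.increment()   # two increments on replica a
b.increment()                   # one on replica b
a.merge(b); b.merge(a)          # exchange state in either direction
assert a.value() == b.value() == 3  # both replicas converge
```

Re-merging the same state changes nothing (idempotence), which is what makes this safe under message duplication and reordering.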

There's a ton of bias against this, for the good reason that the non-distributed concepts still just work without any hassle, but we'd be less in the mud in a fundamental way if we learned to let go of non-eventual consistency.

gregw2|1 year ago

What I noticed missing in this analysis of distributed systems programming was a recognition/discussion of how distributed databases (or datalakes) decoupling storage from compute have changed the art of the possible.

In the old days of databases, if you put all your data in one place, you could scale up (SMP), but scaling out (MPP) really was challenging. Nowadays, you (Iceberg), or a DB vendor (Snowflake, Databricks, BigQuery, even BigTable, etc.), put all your data on S3/GCS/ADLS and you can scale out compute to serve read traffic as much as you want (as long as you accept something like a snapshot-isolation read level and traffic is largely read-only, or writes are distributed across your tables and not all to one big table).

You can now share data across your different compute nodes or applications/systems by managing permissions and pointers via a cloud metadata/catalog service. In a way, you can get microservice databases without each having a completely separate datastore.

hintymad|1 year ago

This reminds me of Rob Pike's talk "Systems Software Research is Irrelevant," given about 25 years ago. Perhaps many systems have matured to a point where any improvement appears incremental to engineers, so the conviction to develop a new programming model isn't strong enough. Or perhaps we're in a temporary plateau, and a groundbreaking tool will emerge in a few years.

Regarding Laddad's point, building tools native to distributed systems programming might be intrinsically difficult. It's not for lack of trying. We've invented numerous algebras, calculi, programming models, and experimental programming languages over the past decades, yet somehow none has really taken off. If anything, I'd venture to assert that object storage, perhaps including Amazon DynamoDB, has changed the landscape of programming distributed systems. These two systems, which optimize for throughput and reliability, make programming distributed systems much easier. Want a queue system? Build on top of S3. Want a database? Focus on query engines and outsource storage to S3. Want a task queue? Just poll DDB tables. Want to exchange states en masse? Use S3. The list goes on.
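The "just poll DDB tables" task queue above can be sketched in a few lines. This is purely illustrative — a lock-guarded dict stands in for a DynamoDB table, and the lock models what a real implementation would do with a conditional write (`ConditionExpression`) so that two workers cannot claim the same task:

```python
import threading

# Stand-in for a DynamoDB table of tasks keyed by task id.
table = {"task-1": {"status": "pending"}, "task-2": {"status": "pending"}}
lock = threading.Lock()

def claim_one(worker_id):
    """Poll for a pending task and claim it atomically; None if drained."""
    with lock:  # models an atomic conditional update on `status`
        for task_id, item in table.items():
            if item["status"] == "pending":
                item["status"] = "claimed"
                item["owner"] = worker_id
                return task_id
    return None

assert claim_one("w1") is not None
assert claim_one("w2") is not None
assert claim_one("w3") is None  # queue drained; a real worker would sleep and re-poll
```

The appeal is exactly what the comment describes: the hard parts (durability, replication, atomic conditional updates) are outsourced to the storage service, and the application keeps only a trivial poll loop.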

Internally to S3, I think the biggest achievement is that S3 can use scalability to its advantage. Adding a new machine makes S3 cheaper, faster, and more reliable. Unfortunately, this involves multiple moving parts and is therefore difficult to abstract into a tool. Perhaps an arbitrarily scalable metadata service is what everyone could benefit from? Case in point, Meta's warm storage can scale to multiple exabytes with a flat namespace. Reading the paper, I realized that many designs in the warm storage are standard, and the real magic lies in its metadata management, which happens to be outsourced to Meta's ZippyDB. Meanwhile, open-source solutions often boast about their scalability, but in reality, all known ones have certain limits, usually no more than 100PBs or a few thousand nodes.

nyrikki|1 year ago

> Distributed SQL Engines

This is what I see holding some applications back.

The relational model is flexible and sufficient for many needs but the ACID model is responsible for much of the complexity in some more recent solutions.

While only usable for one-to-many relationships, the hierarchical model would significantly help in some of the common areas like financial transactions.

Think IBM IMS fastpath, and the related channel model.

But it seems every neo-paradigm either initially hampers itself, or grows to be constrained by Codd's normalization rules, which buy transitive closure at the cost of independence.

As we have examples like Ceph's RADOS, Kafka, etc., if you view the hierarchical file-path model as being intrinsic to that parent-child relationship, we could be distributed.

Perhaps materialized views could be leveraged to allow for SQL queries without turning the fast path into a distributed monolith.

SQL is a multi tool, and sometimes you just need to use a specific tool.

taeric|1 year ago

I'm not clear what the proposal here is? It specifically eschews tooling as a driver in the solution, but why? Wouldn't tooling be one of the most likely areas to get solid progress, as you could make tooling and point it at existing products.

Would be interesting to see comparisons to other domains. Surely you could look at things like water processing plants to see how they build and maintain massive structures that do coordinated work between parts of it? Power generation plants. Assembly factories. Do we not have good artifacts for how these things are designed and reasoned about?

mgraczyk|1 year ago

The author is missing information about LLMs. In the "Obligatory LLM Section" he focuses on distributed systems that use LLMs.

But almost all of the new innovation I'm familiar with in distributed systems is about training LLMs. I wouldn't say the programming techniques are "new" in the way this post is describing them, but the specifics are pretty different from building a database or data pipeline engine (less message oriented, more heavily pipelined, more low level programming, etc)

riku_iki|1 year ago

> But almost all of the new innovation I'm familiar with in distributed systems is about training LLMs

I think database space is still hot topic with many unsolved problems.

tayo42|1 year ago

I agree it has stalled; I think for almost everyone, what the author considers a band-aid is practical enough that there isn't a need for innovation. Distributed systems are more or less solved, imo.

Agree with another commenter, observability tools do suck. I think that's true in general for software beyond a certain amount of complexity. Storing large amounts of data for observability is expensive.

ptmcc|1 year ago

I'm very impressed with the quality of some observability tools like Datadog, which do many good things either automatically or very easily. The usability is leaps and bounds ahead of things like New Relic or the manual instrumentation intensive open source tools. But yes, the costs are insane and require some diligence to keep from running too wild, like most SaaS products these days.

But ultimately we pay it because it gives us incredibly valuable insights and has saved us countless hours in incident response, debugging, and performance profiling. It's lowered my stress level significantly.

sakesun|1 year ago

Throughout my career, ever since CORBA/DCOM, I have been told — and believed — that we are getting very close to the ultimate answer.

Just learned that there is another discontinued attempt: https://serviceweaver.dev/

anonymousDan|1 year ago

This article is just word salad. In what way does Redis 'abstract distribution' for example?

anacrolix|1 year ago

I've been trying to explain this to people for 8 years. All of our existing languages side step the problem. Developers are writing distributed systems every day but seem oblivious to the fact their tools aren't helping at all.

hinkley|1 year ago

I am so appalled every time I ask a group of devs and get the same answer that I've just stopped asking: how many of you took a distributed programming class in college? And it turns out, yet again, that not only am I the only one, but none of them recollect it even being in the course catalog.

For me it was a required elective (you must take at least one of these 2-3 classes). And I went to college while web browsers were being invented.

When Cloud this and Cloud that started every university should have added it to the program. What the fuck is going on with colleges?

shermantanktop|1 year ago

My .02 is that any topic sufficiently important shouldn't be left to colleges. You can force feed a complex topic to a bunch of undergrads, but they will forget 95% of it, and 5 years later they'll say "ohhh, I think I have a textbook on that in my parents' basement."

The reality is that most of this profession is learned on the job, and college acts as a filter at the start of the funnel. If someone is not capable of picking up the Paxos paper, then having had someone tell them about it 5 years ago when they had a hangover won't help.

klysm|1 year ago

Definitely a problem, but I think there's always a gap here. What are colleges optimizing for with computer science programs? I would wager there is an incentive problem at the core which is causing these gaps to occur.

daedrdev|1 year ago

Same situation here. It was one of like 6 options and I had to take 2 of them. I found that I learned a lot from the class, but I was literally one of 7 people taking it that semester in a massive university.

lifeisstillgood|1 year ago

This is a massive coming issue - I am not sure “distributed” can be exactly replaced with “parallel processing” but it’s close

So to simplify: from 1985 to 2005-ish you could keep sequential software exactly the same and it just ran faster with each new hardware generation. One CPU, but transistors got smaller and (hand-wavy) on-chip RAM, pipelining, etc.

Then, roughly around 2010, single CPUs just stopped magically doubling. You got more cores, but that meant parallel or distributed programming — the same software that served 100 people in 1995 was serving 10,000 people in 2000. But by 2015 we needed new coding — we got NoSQL and map-reduce and Facebook data centres.

But the hardware kept growing

TSMC now has wafer-scale chips with 900,000 cores — but my non-parallel, non-distributed code won't run a million times faster — Amdahl's law just won't let me.

So yeah — no one wants to buy new chips with a million cores, because you aren't going to get the speedups — why buy an expensive data centre full of 100x cores if you can't sell them at 100x usage?
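The Amdahl's law argument above is easy to quantify: with serial fraction s, the speedup on n cores is 1 / (s + (1 - s)/n), capped at 1/s no matter how many cores you add. A quick sketch:

```python
def amdahl_speedup(serial_fraction, cores):
    """Amdahl's law: speedup of a program whose serial_fraction cannot parallelize."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even code that is only 5% serial caps a 900,000-core wafer
# at roughly 20x speedup — nowhere near 900,000x.
assert amdahl_speedup(0.05, 900_000) < 20.0
assert round(amdahl_speedup(0.05, 900_000)) == 20
```

Which is the commenter's point exactly: past a few dozen cores, the serial fraction, not the core count, dominates what you can sell.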

rstuart4133|1 year ago

> Although static-location architectures offer developers the most low-level control over their system, in practice they are difficult to implement robustly without distributed systems expertise.

This is the understatement of the article. There are two insanely difficult things to get right in computers. One is cryptography, and the other is distributed systems. I'd argue the latter is harder.

The reason is simple enough to understand. In any program the programmer has to carry in his head every piece of state that is accessible at any given point, the invariants that apply to that state, and the code responsible for modifying that state while preserving the invariants. In sequential programs the code that can modify the shared state is restricted to inner loops and functions you call, and you have to verify every modification preserves the invariants. It's a lot. The hidden enemy is aliasing, and you'll find entire books written on the countermeasures, like immutable objects, functional programming, and locks. Coordinating all this is so hard that only a small percentage of the population can program large systems. I guess you are thinking "but a lot of people here can do that". True, but we are a tiny percentage.

In distributed systems, those blessed restrictions a single execution thread gives us on what code can access shared state go out the window. Every line that could read or write the shared state has to be considered, whether it's adjacent or not, whether you called it here or not. The state interactions explode in the same way interactions between qubits explode. Both explode beyond the capability of human minds to assemble them all in one place. You have to start forming theorems and formulating proofs.

The worst part is that newbie programmers are not usually aware this explosion has taken place. That's why experienced software engineers give the following advice on threads: just don't. You don't have a feel for what will happen; your code will appear to work when you test it while being a rabbit warren of disastrous bugs that will likely never be fixed. It's why Linux RCU author Paul McKenney is still not confident his code is correct, despite being one of the greatest concurrent programming minds on the planet. It's why Paxos is hard to understand despite being relatively simple.

Expecting an above-average programmer to work on a distributed system and not introduce bugs, without leaning on one of the "but it is inefficient" tools he lists, is an impossible dream. An experienced but merely average programmer has no hope. It's hard. Only a tiny, tiny fraction of the programmers on the planet can pull it off — that kind of hard.

ConanRus|1 year ago

No Erlang mention? Sad.

cruelmathlord|1 year ago

it has been a while since I've seen innovation in this arena. My guess is that other domains of programming have eaten its lunch

thway15269037|1 year ago

Oh god, even this article has AI and LLM section in it. When I thought distributed system design could not get any worse, someone actually pitched AI slop in it.

God I want to dig a cave and live in it.

Nevermark|1 year ago

> When I thought distributed system design could not get any worse, someone actually pitched AI slop in it.

I am not sure that pointing out that today's models are going to be MUCH worse at reasoning about distributed code than serial code is "pitching".

Conversely, pointing out that the reason they are so bad at distributed is the lack of related information locality, the same problem humans often have, puts a reasonable second underline on the value of more locality in our development artifacts.