How do you educate people on stream processing? For pipeline-like systems, stream processing is essential IMO; backpressure, circuit breakers, etc. are critical for resilient systems. Yet I have a hard time building an engineering team that can use stream processing, instead of falling back on synchronous procedures that are easier to understand (but nearly always slower and more error-prone).
serial_dev|10 months ago
I worked on stream processing; it was fun, but I also believe it was over-engineered and brittle. The customers didn't want real-time data either: they looked at the calculated values once a week, then made decisions based on them.
Then I joined another company that somehow had the money to pay 50-100 people, and they were using CSV files, shell scripts, batch processing, and all that. It solved the clients' needs, and they didn't have to maintain a complicated architecture or code that would otherwise have been difficult to reason about.
After I left, the first company (the one with the stream processing) was bought by a competitor at a fire-sale price. Some of the tech was relevant to them, but the stream-processing stuff was shut down immediately. The acquiring company had just simple batch processing, and by comparison they were printing money.
If you think it's still worth going with stream processing, give your reasoning to the team; most reasonable developers will learn it if they really believe it's a significantly better solution for the given problem.
Not to over-simplify, but if you can't convince 5 out of 10 people to learn to make their job better, it's either that the people are not up to the task, or you are wrong that stream processing would make a difference.
nemothekid|10 months ago
Systems that needed complex streaming architectures in 2015 could probably be handled today with fast disks and a large Postgres instance (or BigQuery).
jandrewrogers|10 months ago
Companies are organized around an operational tempo that reflects what their systems are capable of. Even if you replace one of their systems with a real-time or quasi-real-time stream processing architecture, nothing else in the organization operates at that low a latency, including the people. It is a very heavy lift even to ask them to reorganize the way they do things.
A related issue is that stream processing systems still work poorly for some data models and often don’t scale well. Most implementations place narrow constraints on the properties of the data models and their statefulness. If you have a system sitting in the middle of your operational data model that requires logic which does not fit within those limitations then the whole exercise starts to break down. Despite its many downsides, batching generalizes much better and more easily than stream processing. This could be ameliorated with better stream processing tech (as in, core data structures, algorithms, and architecture) but there hasn’t been much progress on that front.
timeinput|10 months ago
My concept of stream processing is trying to process gigabits to gigabytes a second and turn it into something much, much smaller, so that it's manageable to store in a database and analyze. At that scale, even calling malloc is sometimes too expensive, let alone using any of the technologies called out in this tech stack.
I understand backpressure and circuit breakers, but (for my general work) they have to happen at the OS/process level -- a metric that auto-scales a microservice worker by going through Prometheus plus an HPA, or something like that, ends up with too many inefficiencies to be practical. A few threads on a single machine just work, whereas engineering a 'cloud native' solution takes ages.
Once I'm down to a job a second or less (and that job takes more than a few seconds to run, enough to hide the framework's overhead), things like Airflow start to work rather than fall flat. But at that point, are these expensive frameworks worth it? I'm only producing 1-1,000 jobs a second.
Stream processing with frameworks like Faust, Airflow, Kafka Streams, etc. all just seems like brittle overkill once you actually try to deploy and use them. How do I tune the PostgreSQL database for Airflow? How do I manage my S3 lifecycles to minimize cost?
A task queue + an HPA really feels like the right kind of thing at that scale, versus caring too much about backpressure etc. when the data rate is 'low'. But I've generally been told by colleagues to reach for more complicated stream processors that perform worse, are (IMO) harder to orchestrate, and (IMO) harder to manage and deploy.