(no title)
das-bikash-dev | 6 days ago
I've built multi-channel chat infrastructure and the honest answer is: keep the monolith until you have a specific scaling bottleneck, not a theoretical one.
One pattern that helped was normalizing all channel-specific message formats into a single internal message type early. Each channel adapter handles its own quirks (some platforms give you 3 seconds to respond, others 20, some need deferred responses) but they all produce the same normalized message that the core processing pipeline consumes. This decoupling is what made it possible to split later without rewriting business logic.
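To make the adapter idea concrete, here's a minimal sketch of that normalization pattern. All names and payload shapes are illustrative, not from any real platform SDK; the point is that each adapter absorbs its channel's quirks (response deadline, deferred acks) and emits one internal type the core pipeline consumes:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class InternalMessage:
    channel: str              # e.g. "slack", "telegram" (illustrative)
    sender_id: str
    text: str
    received_at: datetime
    reply_deadline_s: float   # how long the channel gives you to respond
    needs_deferred_ack: bool  # some platforms want an immediate ack + later reply

class SlackStyleAdapter:
    """Hypothetical adapter for a platform with a tight 3-second window."""
    def normalize(self, payload: dict) -> InternalMessage:
        return InternalMessage(
            channel="slack",
            sender_id=payload["user"],
            text=payload["text"],
            received_at=datetime.now(timezone.utc),
            reply_deadline_s=3.0,
            needs_deferred_ack=True,
        )

class TelegramStyleAdapter:
    """Hypothetical adapter for a platform with a roomier window."""
    def normalize(self, payload: dict) -> InternalMessage:
        return InternalMessage(
            channel="telegram",
            sender_id=str(payload["from"]["id"]),
            text=payload["message"],
            received_at=datetime.now(timezone.utc),
            reply_deadline_s=20.0,
            needs_deferred_ack=False,
        )

# The core pipeline only ever sees InternalMessage, never raw payloads:
def handle(msg: InternalMessage) -> str:
    return f"[{msg.channel}] {msg.sender_id}: {msg.text}"
```

Because `handle` never touches a raw payload, splitting adapters into their own services later is a transport change, not a business-logic rewrite.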
On Redis pub/sub specifically: for a solo dev, skip it until you actually have multiple server instances that need to share state. A single process with WebSocket sessions in memory is fine for early users. The complexity cost of pub/sub isn't worth it until you need horizontal scaling or have a separate worker process pushing messages.
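For what "sessions in memory" looks like in practice, here's a rough sketch (names are made up; `send` stands in for whatever your WebSocket library actually exposes). One dict maps user ids to live connections, and delivery is a direct method call with no pub/sub hop:

```python
class SessionRegistry:
    """Single-process session store: user_id -> live connection object."""

    def __init__(self):
        self._sessions: dict[str, object] = {}

    def connect(self, user_id: str, conn) -> None:
        self._sessions[user_id] = conn

    def disconnect(self, user_id: str) -> None:
        self._sessions.pop(user_id, None)

    def push(self, user_id: str, message: str) -> bool:
        """Deliver directly if the user is connected to this process."""
        conn = self._sessions.get(user_id)
        if conn is None:
            # Offline -- or, once you run a second instance, possibly
            # connected elsewhere. That ambiguity is the signal to add
            # pub/sub or sticky routing, not before.
            return False
        conn.send(message)
        return True
```

The useful failure mode is visible right in `push`: the day a second instance exists, `False` stops meaning "offline", and that's the concrete bottleneck that justifies Redis pub/sub.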
What's your current message volume like? That usually determines timing better than architecture diagrams.
JohannaWeb | 4 days ago
Right now Falcon is still very early — message volume is basically zero outside of local testing. The service split isn’t driven by traffic yet; it’s more about separating identity/trust from messaging so I don’t entangle community membership logic with transport.
The internal normalization point you mentioned is something I’m trying to do early: the goal is a single internal message/event model that adapters (WebSocket, future federation, etc.) translate into, so the core pipeline stays stable if/when the runtime topology changes.
On Redis/pub-sub: totally fair. I’m not running multi-instance yet. JetStream is more experimental at this stage — mostly exploring how identity-aware events propagate, not solving scale today.