
Show HN: S2-lite, an open source Stream Store

77 points | shikhar | 1 month ago | github.com

S2 was on HN for our intro blog post a year ago (https://news.ycombinator.com/item?id=42480105). S2 started out as a serverless API — think S3, but for streams.

The idea of streams as a cloud storage primitive resonated with a lot of folks, but not having an open source option was a sticking point for adoption – especially from projects that were themselves open source! So we decided to build it: https://github.com/s2-streamstore/s2

s2-lite is MIT-licensed, written in Rust, and uses SlateDB (https://slatedb.io) as its storage engine. SlateDB is an embedded LSM-style key-value database on top of object storage, which made it a great match for delivering the same durability guarantees as s2.dev.

You can specify a bucket and path to run against an object store like AWS S3, or skip them to run entirely in-memory. (This also makes it a great emulator for dev/test environments.)

Why not just open up the backend of our cloud service? s2.dev has a decoupled architecture with multiple components running in Kubernetes, including our own K8S operator – we made tradeoffs that optimize for operation of a thoroughly multi-tenant cloud infra SaaS. With s2-lite, our goal was to ship something dead simple to operate. There is a lot of shared code between the two that now lives in the OSS repo.

A few features remain to be implemented (notably deletion of resources and records), but s2-lite is substantially ready. Try the Quickstart in the README to stream Star Wars using the s2 CLI!

The key difference between S2 and something like Kafka or Redis Streams: supporting tons of durable streams. I have blogged about the landscape in the context of agent sessions (https://s2.dev/blog/agent-sessions#landscape). Kafka and NATS JetStream treat streams as provisioned resources, and the protocols/implementations are oriented around such assumptions. Redis Streams and NATS allow for larger numbers of streams, but without proper durability.

The cloud service is completely elastic, but you can also get pretty far with lite despite it being a single-node binary that needs to be scaled vertically. Streams in lite are "just keys" in SlateDB, and cloud object storage is bottomless – although of course there is metadata overhead.
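To make the "just keys" idea concrete, here is a minimal sketch (a hypothetical layout for illustration only, not necessarily how s2-lite actually encodes keys in SlateDB) of how a record at (stream, seq_no) can be packed into a single ordered key, so that each stream occupies its own contiguous key range and replaying from a position is just a range scan:

```rust
// Hypothetical key layout for illustration only.
fn record_key(stream: &str, seq_no: u64) -> Vec<u8> {
    let mut key = Vec::with_capacity(stream.len() + 1 + 8);
    key.extend_from_slice(stream.as_bytes());
    key.push(0x00); // separator so "foo" doesn't interleave with "foobar"
    key.extend_from_slice(&seq_no.to_be_bytes()); // big-endian preserves numeric order
    key
}

fn main() {
    // Records of the same stream sort by sequence number...
    assert!(record_key("sessions/alice", 1) < record_key("sessions/alice", 2));
    // ...and different streams occupy disjoint key ranges.
    assert!(record_key("sessions/alice", u64::MAX) < record_key("sessions/bob", 0));
    println!("each stream is a contiguous, ordered key range");
}
```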

One thing I am excited to improve in s2-lite is pipelining of writes for performance (already supported behind a knob, but needs upstream interface changes for safety). It's a technique we use extensively in s2.dev. Essentially, when you are dealing with high latencies like S3's, you want to keep data flowing through the pipe between client and storage, rather than going lock-step, where you first wait for an acknowledgment and then issue the next write. This is why S2 has a session protocol over HTTP/2, in addition to stateless REST.
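A minimal sketch of the difference (assumes the `tokio` and `futures` crates; `slow_put` stands in for a high-latency object-store write, none of this is s2-lite's actual code):

```rust
use std::time::{Duration, Instant};

use futures::{stream, StreamExt};

// Stand-in for a high-latency write, e.g. a PUT against S3.
async fn slow_put(batch: u64) -> u64 {
    tokio::time::sleep(Duration::from_millis(50)).await;
    batch // "acknowledgment"
}

#[tokio::main]
async fn main() {
    // Lock-step: wait for each ack before issuing the next write (~20 x 50ms).
    let start = Instant::now();
    for batch in 0..20u64 {
        let _ack = slow_put(batch).await;
    }
    println!("lock-step: {:?}", start.elapsed());

    // Pipelined: keep up to 8 writes in flight; acks are still consumed in
    // submission order, which is what makes it easy to track progress.
    let start = Instant::now();
    stream::iter(0..20u64)
        .map(slow_put)
        .buffered(8)
        .for_each(|_ack| async {})
        .await;
    println!("pipelined: {:?}", start.elapsed());
}
```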

You can test throughput/latency for lite yourself using the `s2 bench` CLI command. The main factors are: your network quality to the storage bucket region, the latency characteristics of the remote store, SlateDB's flush interval (`SL8_FLUSH_INTERVAL=..ms`), and whether pipelining is enabled (`S2LITE_PIPELINE=true` to taste the future).

I'll be here to get thoughts and feedback, and answer any questions!

20 comments


csense|1 month ago

When someone says "stream data over the Internet," my automatic reaction is "open a TCP connection."

Adding a database, multiple components, and Kubernetes to the equation seems like massive overengineering.

What value does S2 provide that simple TCP sockets do not?

Is this for like "making your own Twitch" or something, where streams have to scale to thousands-to-millions of consumers?

shikhar|1 month ago

This is a fair question. A stream here == a log. Every write with S2 implementations is durable before it is acknowledged, and it can be consumed in real-time or replayed from any position by multiple readers. The stream is at the granularity of discrete records, rather than a byte stream (although you can certainly layer either over the other).
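A toy sketch of that contract (purely illustrative, not S2's API): an append is acknowledged with the position it was assigned, and any number of readers can replay from an arbitrary position. A real implementation would make the record durable before returning the acknowledgment.

```rust
// In-memory toy; stands in for a durable log.
struct Log {
    records: Vec<Vec<u8>>,
}

impl Log {
    fn new() -> Self {
        Log { records: Vec::new() }
    }

    /// Append a discrete record; the returned sequence number is the ack.
    fn append(&mut self, record: &[u8]) -> u64 {
        self.records.push(record.to_vec());
        (self.records.len() - 1) as u64
    }

    /// Replay from any position; multiple readers can do this independently.
    fn read_from(&self, seq: u64) -> Vec<(u64, Vec<u8>)> {
        self.records
            .iter()
            .enumerate()
            .skip(seq as usize)
            .map(|(i, r)| (i as u64, r.clone()))
            .collect()
    }
}

fn main() {
    let mut log = Log::new();
    log.append(b"hello");
    let seq = log.append(b"world");
    for (s, rec) in log.read_from(seq) {
        println!("{s}: {}", String::from_utf8_lossy(&rec));
    }
}
```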

ED: no k8s required for s2-lite; it is just a single binary. The Kubernetes mention was an architectural note about our cloud service.

shikhar|1 month ago

> Is this for like "making your own Twitch" or something, where streams have to scale to thousands-to-millions of consumers?

Yes, this can be a good building block for broadcasting data streams.

s2-lite is single node, so to scale to that level, you'd need to add some CDN-ing on top.

s2.dev is the elastic cloud service, and it supports high fanout reads using Cachey (https://www.reddit.com/r/databasedevelopment/comments/1nh1go...)

shikhar|1 month ago

Shoutout to CodesInChaos for suggesting that, instead of a mere emulator, we should have an actually durable open source implementation – that is what we ended up building with s2-lite! https://news.ycombinator.com/item?id=42487592

And it has the durability of object storage rather than just local. SlateDB actually lets you also use the local FS; we will experiment with plumbing up the full range of options - right now it's just in-memory or an S3-compatible bucket.

> So I'd try to share as much of the frontend code (e.g. the GRPC and REST handlers) as possible between these.

Right on, this is indeed the case. The OpenAPI spec is also now generated off the REST handlers from s2-lite. We are getting rid of gRPC; s2-lite only supports the REST API (+ a gRPC-like session protocol over HTTP/2: https://s2.dev/docs/api/records/overview#s2s-spec)

michaelmior|1 month ago

> We are getting rid of gRPC

I'm curious why and what challenges you had with gRPC. s2-lite looks cool!

vogtb|1 month ago

Neat! Having literally everything backed by object storage is The Dream, so this makes a lot of sense. To compare this to the options that are available (that aren't Kafka or Redis Streams): I can imagine you could take these items that you're writing to a stream, batch them, and write them into some sort of S3-backed data lake, something like Delta Lake, and then query them using, I don't know, DuckDB or whatever your OLAP SQL thing is. Or you could develop your own S3 schema that just saves these items to batched objects as they come in. So then part of what S2 is saving you from is having to write your own acknowledgement system/protocol for batching these items, and the corresponding read ("consume") queries? Cool!

shikhar|1 month ago

Yes, that is a reasonable way to think about it! And as s2-lite is designed as a single-node system, there is a natural source of truth on what the latest records are for consuming in real-time.

DTE|1 month ago

Love this. Elegant and powerful. Stateful streams are surprisingly difficult to DIY, and as everything becomes a stream of tokens, this is a super useful tool to have in the toolbox.

kwkelly|1 month ago

Can this be used as an embedded lib instead of a separate binary as an API?

And am I understanding correctly that if I pointed 2 running instances of s2-lite at the same place in s3 there would be problems since slatedb is single writer?

shikhar|1 month ago

> Can this be used as an embedded lib instead of a separate binary as an API?

We did not architect explicitly for that, but it should be viable. You could use the `Backend` directly, which is what the REST handlers call: https://docs.rs/s2-lite/latest/s2_lite/backend/struct.Backen...

Happy to accept contributions that make this more ergonomic.

> And am I understanding correctly that if I pointed 2 running instances of s2-lite at the same place in s3 there would be problems since slatedb is single writer?

SL8 will fence the older writer, thanks to S3 conditional writes. I think there would be potential for stale reads until the fencing happens...

ED: Fresh discussion in https://discord.com/channels/1232385660460204122/12323856609...

The stale read potential can be mitigated, https://github.com/s2-streamstore/s2/issues/91
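For anyone curious, here is a generic sketch of the fencing idea (hypothetical types, in-memory stand-in; not SlateDB's actual API): each new writer bumps an epoch on a manifest via a conditional write, and any write issued under an older epoch is rejected.

```rust
use std::sync::Mutex;

/// Stand-in for a manifest object guarded by conditional puts
/// (compare-and-swap, the kind of thing S3 conditional writes enable).
struct Manifest {
    epoch: Mutex<u64>,
}

impl Manifest {
    fn new() -> Self {
        Manifest { epoch: Mutex::new(0) }
    }

    /// Claim writership by bumping the epoch; returns the new epoch.
    fn claim(&self) -> u64 {
        let mut e = self.epoch.lock().unwrap();
        *e += 1;
        *e
    }

    /// A write is accepted only if it was issued under the latest epoch.
    fn conditional_write(&self, writer_epoch: u64) -> Result<(), &'static str> {
        if writer_epoch == *self.epoch.lock().unwrap() {
            Ok(())
        } else {
            Err("fenced: a newer writer has claimed the manifest")
        }
    }
}

fn main() {
    let manifest = Manifest::new();

    let old_writer = manifest.claim(); // first instance, epoch 1
    manifest.conditional_write(old_writer).unwrap();

    let new_writer = manifest.claim(); // second instance starts up, epoch 2
    manifest.conditional_write(new_writer).unwrap();

    // The first instance is now fenced; its writes are rejected.
    assert!(manifest.conditional_write(old_writer).is_err());
    println!("writer at epoch {old_writer} fenced; current epoch is {new_writer}");
}
```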

up2isomorphism|1 month ago

As someone who worked at AWS, I would laugh at someone who really believes that S3 is “bottomless”.

Also, there don't seem to be many use cases nowadays that want this; if there are any, they already use Kafka.

solaris2007|1 month ago

> laugh at someone who really believes that S3 is “bottomless”.

Please elaborate on this.

arpinum|1 month ago

Would be useful to have the SlateDB WAL go to Valkey or somewhere else to reduce S3 PUT costs and latency.