I ran into memory issues on a high-load project, so I built a compact binary encoder/decoder for Pydantic models. It cut in-memory object size by up to 7× vs json.dumps() and ended up saving the whole service from collapsing.
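To give a feel for where that kind of saving comes from (this is not PyByntic's actual wire format, just a stdlib sketch of the idea): JSON repeats every field name and encodes numbers as text, while a fixed binary layout packs the same values into a handful of bytes. The field names and types below are illustrative only.

```python
import json
import struct

# Hypothetical cached user record; fields are illustrative, not from PyByntic.
user = {"id": 123456789, "age": 34, "premium": True, "balance": 10.5}

# JSON carries field names plus textual numbers on every record.
json_bytes = json.dumps(user).encode()

# A fixed binary layout: u64 id, u8 age, bool flag, f64 balance = 18 bytes.
packed = struct.pack("<QB?d", user["id"], user["age"], user["premium"], user["balance"])

print(len(json_bytes), len(packed))  # the binary form is several times smaller
```

The gap widens further with many small fields, since the per-field name overhead dominates short values.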
sijokun|4 months ago
GitHub: https://github.com/sijokun/PyByntic
Works with annotated Pydantic models and gives you:
– .serialize() -> bytes
– .deserialize(bytes) -> Model
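The round-trip contract above can be mimicked with the stdlib alone. This is not PyByntic's implementation (see the repo for that); it's a minimal sketch of the serialize()/deserialize() interface shape, with a hypothetical fixed layout.

```python
import struct
from dataclasses import dataclass

@dataclass
class User:
    id: int
    age: int

    # Illustrative fixed layout: u64 id, u8 age. Not PyByntic's format.
    _FMT = "<QB"

    def serialize(self) -> bytes:
        """Pack the model's fields into a compact byte string."""
        return struct.pack(self._FMT, self.id, self.age)

    @classmethod
    def deserialize(cls, data: bytes) -> "User":
        """Rebuild the model from the packed bytes."""
        return cls(*struct.unpack(cls._FMT, data))

u = User(id=42, age=30)
assert User.deserialize(u.serialize()) == u  # lossless round trip
```

The key property is the same as the library's: serialize then deserialize is lossless, and the bytes carry no field names.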
Curious to hear whether others here have hit similar problems, and how you solved them?
P.S. The project was a Telegram MiniApp with 10M+ MAU; we were storing cached user objects in a Redis Cluster.
Locutus_|4 months ago
I've had exactly the same situation: a ~2M-MAU service with Redis as the only persistence system, all data stored as JSON-serialized Pydantic models. The storage overhead was terrible and cost real money.
This would have been a super nice thing to have back then.
I wonder, though, how much sense it would make to get something like this mainlined into upstream Pydantic. Keeping it downstream raises continuity and dependency-lock concerns for many, while having it in the main library would significantly drive adoption.