dsies | 4 years ago
I built https://batch.sh specifically to address this - not having to reinvent the wheel for storage, search and replay.
For some cases, storing your events in a pg database is probably good enough - but if you're planning on storing billions of complex records AND fetching a particular group of them every now and then - it'll get rough, and you'll need a more sophisticated storage system.
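For a sense of what the "good enough" pg approach looks like, here's a minimal sketch of an append-only events table with an index on the stream/group id (using SQLite's stdlib driver as a stand-in for Postgres; the table and column names are hypothetical, not from batch.sh):

```python
import json
import sqlite3

# In-memory stand-in for a Postgres events table (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id        INTEGER PRIMARY KEY,  -- global, append-only ordering
        stream_id TEXT NOT NULL,        -- the aggregate the event belongs to
        type      TEXT NOT NULL,
        payload   TEXT NOT NULL         -- JSON body of the event
    )
""")
# This index is what keeps "fetch a particular group of events" cheap.
conn.execute("CREATE INDEX idx_events_stream ON events (stream_id, id)")

def append_event(stream_id, event_type, payload):
    conn.execute(
        "INSERT INTO events (stream_id, type, payload) VALUES (?, ?, ?)",
        (stream_id, event_type, json.dumps(payload)),
    )

def load_stream(stream_id):
    rows = conn.execute(
        "SELECT type, payload FROM events WHERE stream_id = ? ORDER BY id",
        (stream_id,),
    )
    return [(t, json.loads(p)) for t, p in rows]

append_event("lot-42", "BidPlaced", {"amount": 100})
append_event("lot-42", "BidPlaced", {"amount": 120})
print(load_stream("lot-42"))
```

At billions of rows, index bloat, vacuuming, and partitioning become the pain points this simple layout doesn't address.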
What storage mechanism did you use? And how did you choose which events to replay?
acjohnson55 | 4 years ago
All command processing and state queries required us to read and process the whole event log. But this was fairly cheap, because we're talking maybe a couple dozen events per lot. To optimize, we might have considered serializing core aspects of the derived state, to use as a snapshot. But this wasn't necessary.
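The replay-on-read pattern described above is essentially a fold over the event log, with an optional snapshot as the fold's starting point. A minimal sketch (the event shapes and field names here are made up, since the actual lot events aren't given):

```python
from functools import reduce

# Hypothetical event log for one lot; a couple dozen events is typical.
events = [
    {"type": "LotCreated", "reserve": 100},
    {"type": "BidPlaced", "amount": 110},
    {"type": "BidPlaced", "amount": 125},
]

def apply(state, event):
    """Fold a single event into the derived state."""
    if event["type"] == "LotCreated":
        return {"reserve": event["reserve"], "high_bid": None}
    if event["type"] == "BidPlaced":
        return {**state, "high_bid": max(state["high_bid"] or 0, event["amount"])}
    return state  # ignore unknown event types

def derive_state(events, snapshot=None):
    """Replay the whole log, or resume from a serialized snapshot."""
    return reduce(apply, events, snapshot)

# Full replay from scratch:
state = derive_state(events)

# The snapshot optimization mentioned above: fold only the tail of the log
# on top of a previously serialized state.
resumed = derive_state(events[2:], snapshot={"reserve": 100, "high_bid": 110})
```

With logs this short, the full replay stays cheap, which is why the snapshot step can be deferred until it's actually needed.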
Batch looks pretty cool! I'll keep that in mind next time I'm considering reinventing the wheel :)