(no title)
dvhh | 4 months ago
But the application was fairly low volume in data and usage, so eventual consistency and capacity were not an issue. And yes, timestamp monotonicity is not guaranteed when multiple clients upload at the same time, so a unique ID was given to each client at startup and included in entry names to guarantee uniqueness. Metadata and prefixes were used to indicate the state of an object during processing.
Not ideal, but it was cheaper than a DB or a dedicated MQ. The application did not last, but I would try the approach again if it suited the situation.
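A minimal sketch of the key scheme described above (the key layout, function names, and millisecond precision are my assumptions, not details from the comment): each client's startup-assigned ID breaks ties when two clients upload in the same instant, and the leading prefix encodes processing state.

```python
import time
import uuid
from typing import Optional

# Hypothetical: each client receives a unique ID once at startup, so two
# clients uploading at the same timestamp still produce distinct keys.
CLIENT_ID = uuid.uuid4().hex

def entry_key(state: str, name: str, ts: Optional[float] = None) -> str:
    """Build an object key: state prefix, sortable millisecond timestamp,
    then the client ID as a tie-breaker for concurrent uploaders."""
    ts = time.time() if ts is None else ts
    return f"{state}/{int(ts * 1000):013d}-{CLIENT_ID}/{name}"

# A state change (e.g. pending/ -> done/) would be a copy under the new
# prefix plus a delete of the old key on an S3-like store.
key = entry_key("pending", "order.json", ts=1700000000.0)
```

Listing under a prefix like `pending/` then yields entries in rough timestamp order, since the zero-padded millisecond field sorts lexicographically.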
hamandcheese | 4 months ago
I was thinking that writes could be indexed/prefixed into timestamp buckets according to the client's local time. This can't be trusted, of course, but the application's consumers could detect and reject any write whose upload timestamp exceeds a fixed delta from the timestamp bucket it was uploaded to. That allows arbitrary seeking to any point in the log.
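A sketch of that rejection rule (the 60-second bucket size, 120-second delta, and all names are illustrative assumptions): the client picks a bucket from its untrusted local clock, and the consumer keeps the write only if the trusted upload time is within the delta of that bucket.

```python
BUCKET_SECONDS = 60   # assumed bucket granularity
MAX_DELTA = 120       # assumed tolerated clock skew, in seconds

def bucket_for(client_ts: float) -> int:
    """Bucket index derived from the client's (untrusted) local clock."""
    return int(client_ts // BUCKET_SECONDS)

def accept(bucket: int, upload_ts: float) -> bool:
    """Consumer-side check: keep the write only if the server-observed
    upload time is within MAX_DELTA of the bucket's start."""
    bucket_start = bucket * BUCKET_SECONDS
    return abs(upload_ts - bucket_start) <= MAX_DELTA

honest = bucket_for(1_700_000_000)       # client clock roughly correct
lying = bucket_for(1_700_009_999)        # client clock far in the future
accept(honest, 1_700_000_030)            # within the delta: kept
accept(lying, 1_700_000_030)             # far from its bucket: rejected
```

Since bucket indices are monotone in time, a reader can seek to any point in the log by computing the bucket for the desired timestamp and listing from that prefix forward.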