mehrant | 11 months ago
this can be a good learning opportunity for both of us (potentially more for us) :)
if you're interested, please send us an email to support@hpkv.io and we can arrange that
mehrant | 11 months ago
this is 1M records, 3M operations on a single node, single thread, recorded in real time (1x).
I understand that without access to the source of the test program it's hard to trust, but we can arrange that if you decide to take us up on that call :)
pclmulqdq | 11 months ago
In every persistent database, that number indicates that an entry was written to a persistent write-ahead log and that the written value will survive if the machine crashes immediately after the write. Clearly you don't do this, because it's impossible to do in 600 ns. For most of the non-persistent databases (e.g. redis, memcached), write latency is how long it takes for something to enter the main data structure and become globally readable. Usually, "write done" also means that the key is globally readable at no extra performance cost (i.e., it was not just dumped into a write-ahead log in memory and then returned).
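The distinction being drawn can be made concrete with a minimal sketch (names and structure are illustrative, not HPKV's actual code): a durable write must append to a write-ahead log and fsync before acknowledging, while a redis/memcached-style write only has to land in the in-memory structure.

```python
import os
import tempfile

def durable_write(log_fd: int, key: bytes, value: bytes) -> None:
    # Persistent-DB semantics: append the record to a write-ahead log and
    # fsync before acknowledging, so the value survives a crash that happens
    # immediately after the call returns. The fsync alone is a device round
    # trip -- microseconds at the very best, never ~600 ns.
    os.write(log_fd, key + b"=" + value + b"\n")
    os.fsync(log_fd)

def volatile_write(store: dict, key: bytes, value: bytes) -> None:
    # Non-persistent semantics (redis/memcached style): the write is "done"
    # once the value sits in the in-memory structure and is globally readable.
    store[key] = value

log_fd, log_path = tempfile.mkstemp()
durable_write(log_fd, b"k1", b"v1")
os.close(log_fd)

store = {}
volatile_write(store, b"k1", b"v1")
```

Timing the two paths on any ordinary machine shows the gap: the dictionary assignment completes in tens of nanoseconds, while the fsync'd append is several orders of magnitude slower.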
In a world where you spoke about the product more credibly, or where the code was open source, I might accept that this was the case. As it stands, it looks like:
1. This was your "marketing gimmick" number that you are trying to sell (every database that isn't postgres has one).
2. You got it primarily by compromising on the meaning of "write done," and not on the basis of good engineering.