> BigCache does not handle collisions. When a new item is inserted and its hash collides with a previously stored item, the new item overwrites the previously stored value.
I'm not sure what hashing algorithm it is using, but this seems like a pretty undesirable property.
    // Sum64 gets the string and returns its uint64 hash value.
    func (f fnv64a) Sum64(key string) uint64 {
        var hash uint64 = offset64
        for i := 0; i < len(key); i++ {
            hash ^= uint64(key[i])
            hash *= prime64
        }
        return hash
    }
The referenced `offset64` is `14695981039346656037` and the `prime64` is `1099511628211`.
I can find a reference to a similarly named `Sum64` function in https://godoc.org/blainsmith.com/go/seahash#Sum64 which indicates SeaHash is a non-cryptographic hash function and further considers `Sum64` to be a checksum function.
I'm guessing there's a lot more collisions possible here than otherwise expected.
It's using FNV-1a [1] which is indeed a non-cryptographic hash function and is a perfectly suitable choice for a hash table [2]. The goals are speed and low chance of collision, not security.
It's a pity they use a fixed offset basis for FNV-1a. For many non-persistent use cases it's perfectly valid to use a randomized offset basis. This makes it harder for external parties to come up with keys that would cause a high collision rate.
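A minimal sketch of that idea, using a hypothetical `randomizedFNV64a` type (this is not BigCache's actual API): keep the FNV-1a loop unchanged, but draw the offset basis at random once per process instead of using the fixed `14695981039346656037`.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

const prime64 = 1099511628211

// randomizedFNV64a is a hypothetical variant of BigCache's fnv64a:
// the same multiply/xor loop, but the offset basis is randomized per
// process. External parties can no longer precompute a set of keys
// that all land in the same bucket, since they don't know the basis.
type randomizedFNV64a struct {
	basis uint64
}

func newRandomizedFNV64a() randomizedFNV64a {
	src := rand.New(rand.NewSource(time.Now().UnixNano()))
	return randomizedFNV64a{basis: src.Uint64()}
}

func (f randomizedFNV64a) Sum64(key string) uint64 {
	hash := f.basis
	for i := 0; i < len(key); i++ {
		hash ^= uint64(key[i])
		hash *= prime64
	}
	return hash
}

func main() {
	h := newRandomizedFNV64a()
	fmt.Println(h.Sum64("hello")) // value varies from run to run
}
```

The trade-off is that hashes are no longer stable across processes or restarts, which is why a randomized basis only suits non-persistent, in-process use cases like this one.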
This is how every hash table acts by default in almost every language I've used. If a key already exists in the hash, the value is overwritten. I think the only exception was in Java and you had to use a specific hash object to get the effect you are stating.
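For illustration, Go's built-in map behaves exactly as described for a repeated key. (Note this is same-key overwriting, which is distinct from the BigCache behavior quoted above, where two *different* keys with the same 64-bit hash overwrite each other.)

```go
package main

import "fmt"

func main() {
	m := map[string]int{}
	m["k"] = 1
	m["k"] = 2 // same key: the old value is silently replaced
	fmt.Println(m["k"], len(m)) // 2 1
}
```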
(author of Ristretto here) Yeah, I'm surprised not enough people here are talking about Ristretto. It was actually designed with knowledge of the existing Go caches, to deal with their limitations -- based on multiple papers and Caffeine [1], popular in Java world. In particular, achieving higher hit ratios while avoiding contention caused by heavy concurrent access.
This is very cool. But why didn't they just go with Redis? Their blog stated that the requirement to "be very fast even with millions of entries" wasn't met by Redis, but I have a very hard time believing that.
Others have mentioned that the article calls out ignoring traditional in-memory cache daemons because of the additional network time, but with a targeted p50 response time (of their HTTP service fronting all of this), and caches like Redis and memcached being able to respond in hundreds of microseconds... it does feel like they didn't actually run the numbers.
The other natural alternative would simply be to run Redis/memcached and colocate this HTTP service on the same box. Now your "network latency" component is almost entirely negligible, and you've deferred the work of managing memory to applications _designed_ around doing so.
There’s no network latency involved as it’s an in-memory, embedded database.
Also, max throughput here is ultimately a function of how fast the memory allocator is and of memory bandwidth, which, even with a poor implementation, would be orders of magnitude more than what you can do with standard NICs and the Linux kernel networking stack.
Moreover, as a consequence, far less context switching is involved.
Per the article, Redis requires a network call to the Redis server. They wanted an in-memory cache. Not sure how you would keep the per-machine in-memory caches in sync without change propagation to all the machines. This looks like a simple TTL cache.
1. Redis has a lot of functionality that a simple cache client doesn't need.
2. Redis's connection model can lead to complications.
3. Redis's to-disk checkpointing in practice uses a lot of memory.
4. Redis's poorly chosen default settings have cost the industry an uncounted but large sum of money.
5. Redis is written in C. That's a bad idea for a networked application.
6. Redis's creator is a person who doesn't deserve our support. He's constantly combative with experts who have given him good advice about how to improve Redis, because he has a vision of "simplicity" which translates to "what I already understand."
Redis is a decent choice if you need all of its features. It's got a wide spectrum. But, if you don't need ALL of them, then pick a simpler and better designed system.
My understanding is that global variables in Go are not garbage collected. Is that true? If so, doesn't this create the cache struct as a global var, meaning this cache will never be garbage collected? https://github.com/allegro/bigcache/blob/master/server/serve...
I ask this because I’ve run into a problem with this in the past.
None of these projects or exercises are futile: something is learned in the process, the language gets pushed into new areas, algorithms get proved out, etc. But it seems that Go is becoming the next hammer, and every problem in the problem set has become a nail. Personally, I would have just stuck with Redis.
I have not used this package, but there is another one for Go called freecache that offers a similar solution. The primary benefit of such solutions is that if you have a Go service that relies on local caching of data, a package like this can significantly reduce GC scan pressure, removing the work Go needs to do when analyzing pointer data sitting on the heap.
Why does this help? Instead of keeping millions upon millions of items alive on the heap (forcing Go to scan all of that data), you can serialize/deserialize the data in a solution like this with usually minimal overhead. Storing your cached data in a solution like this suddenly gets rid of the need to have live pointers of data on the heap. This is because your data is now stored as a []byte slice somewhere in the Cache data structure that this code uses. Finally, since packages like BigCache/Freecache are built with only a very small handful of heap objects, the Go runtime can go back to what it does best: spending most of its time in your application logic.
If anyone has any doubts about this approach, try it out: we saw dramatic differences using a package like this vs a naive map of pointer-based data, or vs something like HashiCorp's LRU data structure.
The last service we applied this model to went from running at around 900% CPU to 400%. That was a big win in my book and practically cut our cluster size in half.
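A simplified sketch of the serialize-to-bytes idea, using a hypothetical `byteCache` (not the BigCache API). Note that a `map[string][]byte` still contains pointers (string keys and slice headers), which is why BigCache goes further and packs entries into a few large pre-allocated byte slices indexed by a pointer-free `map[uint64]uint32`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type User struct {
	ID   int
	Name string
}

// byteCache stores values as serialized []byte, so the GC sees a
// handful of byte slices rather than millions of live User structs
// whose pointer fields would each need scanning.
type byteCache struct {
	data map[string][]byte
}

func newByteCache() *byteCache {
	return &byteCache{data: make(map[string][]byte)}
}

// Set serializes v into the cache; the original object can then
// be dropped and collected.
func (c *byteCache) Set(key string, v interface{}) error {
	b, err := json.Marshal(v)
	if err != nil {
		return err
	}
	c.data[key] = b
	return nil
}

// Get deserializes the stored bytes into v, paying a decode cost
// on every read in exchange for the reduced scan pressure.
func (c *byteCache) Get(key string, v interface{}) (bool, error) {
	b, ok := c.data[key]
	if !ok {
		return false, nil
	}
	return true, json.Unmarshal(b, v)
}

func main() {
	c := newByteCache()
	c.Set("u1", User{ID: 1, Name: "ada"})
	var u User
	found, _ := c.Get("u1", &u)
	fmt.Println(found, u.Name) // true ada
}
```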
> Storing your cached data in a solution like this suddenly gets rid of the need to have live pointers of data on the heap.
So if I understand this correctly, Go does not do any recursive scanning of structures? Each unique bit of data owns its data for as long as it needs to?
People considering this should benchmark both options. Constantly deserializing everything you need has a predictable but very high cost, often worse for throughput than a well-tuned GC.
The code for the hash function is found here: https://github.com/allegro/bigcache/blob/f64abe8f4fe2f5769bd...
[1] https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo...
[2] https://softwareengineering.stackexchange.com/questions/4955...
This is worth a read: https://blog.dgraph.io/post/introducing-ristretto-high-perf-...
[1]: https://github.com/ben-manes/caffeine
"we decided to give up external caches like Redis, Memcached or Couchbase mainly because of additional time needed on the network"
Reason: additional time needed on the network.
There are many scenarios in which you will want an in-memory cache inside the application; very low latency requirements are one.
However, that struct contains data that can and will be GC'd once no longer used. This is the type of the variable: https://github.com/allegro/bigcache/blob/master/bigcache.go