FWIW, I'm pretty sure Java's HashMap has the same behaviour: it grows the table, but never shrinks it. Even if you call .clear(), it just empties the table rather than throwing it away.
I imagine there are lots of scenarios in which this is what you want, because after emptying the map, you're going to re-fill it, and it saves reallocating the table. But it would be frustrating in a scenario when that isn't what you want.
If a map has this behaviour, I would say that the most important thing is that it should be clearly documented (Java's isn't). The second most important thing is that there should be a way to get round it - either a .clearHarder() method which throws away the table, or a .compact() method which downsizes it while retaining the content.
Rust has shrink_to_fit(). Personally I've never had to use it, but you always end up scrolling past the backing-allocation management methods for all the standard collections when looking through the docs.
> Shrinks the capacity of the map as much as possible. It will drop down as much as possible while maintaining the internal rules and possibly leaving some space in accordance with the resize policy.
Wouldn't reallocating the map be very cheap when running on a JVM? Yes, it isn't something to do in the hot path, but surely it should be faster than a direct malloc(), right? I'm being very naive and ready to be proven wrong.
The main difference from Go is that the table of a HashMap in Java is a table of pointers, not a table of structs, so the overhead of not shrinking the table is less of an issue.
You're right that Java's HashMap only ever resizes upwards - the most common use case. However, if you need to release that memory after clearing, the typical approach is to do something that lets the GC sort it out.
E.g.,
    var map = new HashMap<Bla, Bla>();
    // many things added to map
    // many but not all things removed from map
    // to shrink it to the size of the remaining items:
    map = new HashMap<>(map);
Yeah, it's not nice, but the people who wrote the JDK are far smarter than me, so I figure they optimised for the 99% use-case, not the 1%.
> The second most important thing is that there should be a way to get round it - either a .clearHarder() method which throws away the table, or a .compact() method which downsizes it while retaining the content.
Wouldn't just creating a new map be a pretty good way of doing it? If the reduction in size is significant, then the cost of the new allocation will be small compared to the cost of all the allocations you've done to build the original map.
> If a map has this behaviour, i would say that the most important thing is that it should be clearly documented
IIRC, most hash table implementations don't automatically shrink. More documentation is always nice, but isn't not shrinking automatically kind of the expected "default" behavior?
Yes, Go maps never shrink. This is good for most use cases in practice.
Because in practice, map entry deletions seldom happen.
And when deletions are needed, users often hope maps don't shrink, to avoid unnecessary memory allocations and entry moves later on.
For example, I only delete map entries in one of my projects.
In that project, I clear all entries of a map and re-use the map to avoid making new allocations. The current design satisfies my need well.
This is more an optimization than a memory leak.
To avoid the kind-of memory leak, just make a new map and discard the old one.
What happened to a memory leak being memory that was allocated but has no remaining reference, so it can't be freed? If you can copy the map, release the original, and see memory usage drop, there is no leak?
A memory leak has always meant "this program keeps allocating more memory as it runs, even though it's not being asked to store anything new". That is equivalent to saying that a program has a memory leak when it fails to free memory that is no longer needed, not just memory that is no longer reachable.
A common example of a memory leak is adding items to a "cache" without any mechanism that ever evicts them. The "cache" is thus not a cache, but a memory leak (a common implementation of this leaking scenario: items are put in a map, but never removed from the map).
Memory leak has never, as far as I know, referred to the specific case of memory that is no longer accessible from program code to be freed. In fact, this definition doesn't even make sense from a runtime system perspective, since even in C, the memory is actually always still reachable - from malloc()'s internal data structures.
Those pretty much can't happen in garbage collected languages, so the usage of the term has been widened to include things like this. I agree it's a shame.
This all looks like reasonable implementation behavior which will give optimal runtime performance in most "common" cases. If one really wants or needs a map that'll free memory as it shrinks (and sure, for some folks that'd be super useful), one's always free to implement one's own.
> Also, in this case, the amount of required memory is less significant during peak times due to some optimizations to reduce the memory consumed.
Pretty sure it's the same: because of collisions, maps always have a certain fraction of empty slots (much more so than vectors). I've not heard of Go maps having very high load factors, so they probably resize at around 50% full. I didn't check any of the numbers; I'll assume a 50% load factor, doubling on resize, and entries of {hash, key, value} with a word-sized (8-byte) hash, but some, and possibly all, of those assumptions are likely off.
Anyway with a load factor of 50% and doubling at any point you’d have 2n to 4n slots in the map total (depending whether you’re nearing resize or just went through one). And thus if you make entries smaller, you make all those overhead slots smaller as well.
hash + int + blob would be 8+8+128 = 144 bytes; a million of those is 144MB, x2 is 288MB and x4 is 576MB, so the observed bounds are pretty close. IIRC Go maps use some form of separate chaining (though not "trivial" linked lists); I assume those chain blocks do get reclaimed as entries are removed even if the map itself does not get shrunk, which would explain why memory consumption goes down as elements are removed from the initial map.
With a pointer each “entry” is 24 bytes instead, for a map overhead of 48~96MB (so clearly Go maps use either a higher load factor or a more efficient storage than the trivial scheme being assumed here, possibly both).
The actual data accounts for 128MB plus some; there's a 144MB difference between the full and the emptied pointer-maps, which would account for some chain links being removed, and maybe some object headers / allocator overhead for the on-heap arrays.
Let's be kind and assume that prior to removal, the map has just trebled in size and the extra space wasn't used. Doesn't this imply that a map has an overhead of about 100 bytes per key/value pair? How can this be so?
There's some waste involved in not pre-allocating the map to begin with. Check the output of this variation (https://go.dev/play/p/vQwg3GajzXx -- n shrunk to 1,000 to stay within the playground's memory constraints), which fills the map again after the GC. You'll see the map doesn't grow back to the max size.
And if you specify the size of the map up front in the `make()` function, it never grows or shrinks (or not significantly): https://go.dev/play/p/AGW35kMMOc5
IIRC, what's happening is that whenever the map hits what was pre-allocated, it expands by a certain growth factor of the current size. So at some point between 1 and 1,000,000 a big chunk was allocated that went over what was necessary for 1,000,000 entries. In my testing it happened around i == 851_500.
I don't see any issue here. The same behavior can be achieved with slices:
    foo := make([]int, 0, 1000)
    for {
        for k := range bar {
            foo = append(foo, k)
        }
        foo = foo[:0]
    }
The slice will grow as large as the largest dataset, and the map will do the same. You need to let go of it so it can be GCed, and create a new one.
The point is that you can remove the entries from the map, and the map won't ever shrink. If you're using a large value type in the map[1], the map's dead storage will be large - by a functionally unbounded amount. Most sane collection libraries shrink their backing store after some sufficiently large portion becomes dead.
[1] I would argue a general purpose hash table/map should really switch to using a hash code=>index mapping automatically if the value type size is sufficiently large.
It does, and as the article mentions, it’ll collect the map elements. But it doesn’t collect the map infrastructure that grew to accommodate the elements, and doesn’t shrink when the elements are removed, because the map never stops referencing them.
cglong|3 years ago
[1]: https://github.com/golang/go/issues/54766
Gwypaas|3 years ago
https://doc.rust-lang.org/std/collections/struct.HashMap.htm...
And the docs make it clear that "clear()" only removes the elements, giving a hint of where to go next if you stumble upon the issue in the OP.
> "Clears the map, removing all key-value pairs. Keeps the allocated memory for reuse."
https://doc.rust-lang.org/std/collections/struct.HashMap.htm...
esprehn|3 years ago
maps being magic without a real interface means exposing new API surface is difficult though.
slantedview|3 years ago
Why is it that every shortcoming of Go is spun by its advocates into something that is actually a good thing?
fnord123|3 years ago
If you put data into a hash map and forget the key, you leaked.
twic|3 years ago
Roedy Green coined the name "packratting" for this modern kind of memory leak: https://www.mindprod.com/jgloss/packratting.html
ilyt|3 years ago
Shame that quirk wasn't documented
LukeShu|3 years ago
Go uses a load factor of 6.5 (expressed as the average load of a bucket, where buckets hold a maximum of 8 entries; i.e. ~81% full) (ref: https://github.com/golang/go/blob/go1.19.3/src/runtime/map.g... )
> I’ll assume … doubling on resize
Indeed (ref: https://github.com/golang/go/blob/go1.19.3/src/runtime/map.g...)