top | item 45370337


truth_seeker | 5 months ago

>Note that the best-case scenario is the elimination of the overheads above to 0, which is at most ~10% in these particular benchmarks. Thus, it's helpful to consider the proportion of GC overhead eliminated relative to that 10% (so, 7% reduction means 70% GC overhead reduction).

Wow, amazing to see that off-heap allocation can be that good.

https://go.googlesource.com/proposal/+/refs/heads/master/des...

pjmlp | 5 months ago

Meanwhile Java and .NET have had off-heap and arenas for a while now.

Which goes to show how Go could be much better, had it been designed with the learnings of other languages taken into account.

The adoption of runtime.KeepAlive() [0], and of the related runtime.AddCleanup() as a replacement for finalizers, are also learnings from other languages [1].

[0] - https://learn.microsoft.com/en-us/dotnet/api/system.gc.keepa...

[1] - https://openjdk.org/jeps/421
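For reference, the pattern JEP 421 steers people toward instead of finalize() looks roughly like this in Java: register a cleanup action with a java.lang.ref.Cleaner that must not capture the tracked object itself. A minimal sketch (class and field names are illustrative, not from any of the linked docs):

```java
import java.lang.ref.Cleaner;

// Sketch of the Cleaner pattern recommended over finalize().
final class NativeBuffer implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();

    // Static nested class so the cleanup action cannot capture `this`
    // (capturing it would keep the object reachable forever).
    private static final class State implements Runnable {
        long address; // stand-in for a native handle (illustrative)
        State(long address) { this.address = address; }
        @Override public void run() {
            address = 0; // a real implementation would free the native resource here
        }
    }

    private final State state;
    private final Cleaner.Cleanable cleanable;

    NativeBuffer(long address) {
        this.state = new State(address);
        // The cleanup action runs at most once: on close(), or after GC as a backstop.
        this.cleanable = CLEANER.register(this, state);
    }

    long address() { return state.address; }

    @Override public void close() {
        cleanable.clean(); // deterministic release
    }
}
```

Go's runtime.AddCleanup has the same shape: an explicit cleanup function attached to an object, with the same "must not reference the object itself" caveat.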

truth_seeker | 5 months ago

What a coincidence! :)

I recently used MemorySegment in Java; it is extremely good. Just yesterday I implemented the Map and List interfaces using MemorySegment as a backing store for batch operations, instead of using the OpenHFT stuff.
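Loosely in that spirit, here is a minimal sketch of a fixed-capacity off-heap long list backed by a confined Arena and MemorySegment (names and design are illustrative, not the code described above):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

// Fixed-capacity list of longs stored outside the GC-managed heap.
final class OffHeapLongList implements AutoCloseable {
    private final Arena arena = Arena.ofConfined();
    private final MemorySegment segment;
    private int size = 0;

    OffHeapLongList(int capacity) {
        // Allocates capacity * 8 bytes of native memory, zero-initialized.
        segment = arena.allocate(ValueLayout.JAVA_LONG, capacity);
    }

    void add(long v) {
        segment.setAtIndex(ValueLayout.JAVA_LONG, size++, v);
    }

    long get(int i) {
        return segment.getAtIndex(ValueLayout.JAVA_LONG, i);
    }

    int size() { return size; }

    @Override
    public void close() {
        arena.close(); // frees the native memory deterministically
    }
}
```

The point being that the backing store never appears to the GC at all; only the small wrapper object is heap-allocated.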

I tried -XX:TLABSize before but wasn't getting the desired performance.

Not sure about .NET though; haven't used it since last decade.