High-performance header-only container library for C++23 on x86-64

78 points | mattgodbolt | 1 month ago | github.com

From the readme:

The B+tree implementation provides significant performance improvements over industry standards for large trees. For some workloads with large trees, we've observed:

- vs Abseil B+tree: 2-5× faster across insert/find/erase operations
- vs std::map: 2-5× faster across insert/find/erase operations
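As a rough illustration of how this kind of comparison is typically run — a sketch, not the project's actual benchmark harness; `absl::btree_map` is assumed as the drop-in alternative:

```cpp
#include <chrono>
#include <cstdint>
#include <map>
#include <random>
#include <tuple>
#include <vector>

// Time n random inserts followed by n finds in a std::map; returns
// (ns per insert, ns per find, number of keys found). Comparing against
// absl::btree_map or the library under discussion would only require
// swapping the container type.
std::tuple<double, double, std::size_t> bench_map(std::size_t n) {
    std::mt19937_64 rng(42);                     // fixed seed: reproducible keys
    std::vector<int64_t> keys(n);
    for (auto& k : keys) k = static_cast<int64_t>(rng());

    std::map<int64_t, int64_t> m;
    auto t0 = std::chrono::steady_clock::now();
    for (auto k : keys) m.emplace(k, k);
    auto t1 = std::chrono::steady_clock::now();

    std::size_t hits = 0;
    for (auto k : keys) hits += m.count(k);      // every inserted key is found
    auto t2 = std::chrono::steady_clock::now();

    double ins = std::chrono::duration<double, std::nano>(t1 - t0).count() / n;
    double fnd = std::chrono::duration<double, std::nano>(t2 - t1).count() / n;
    return {ins, fnd, hits};
}
```

Real comparisons would also sweep key distributions and tree sizes, since B-tree advantages tend to grow with working-set size.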

26 comments

plorkyeran|1 month ago

Being 2-5× faster than both Abseil's B+tree and std::map means that Abseil's B+tree must have had roughly the same performance as std::map for the tested workload. This is... very unusual. I have only ever seen it be either much faster or moderately slower.

sedatk|1 month ago

Not necessarily. Insert could be 5× faster than one and 2× faster than the other, and the two baselines would still differ significantly from each other. 2×-5× is a wide range.
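sedatk's point can be made concrete with assumed (not measured) per-operation costs: both baselines fall inside the quoted 2-5× band, yet they differ from each other by 2.5×.

```cpp
// Hypothetical per-insert costs in nanoseconds (illustrative numbers only).
// The new tree is 5x faster than baseline A and 2x faster than baseline B,
// yet A and B still differ from each other by 2.5x.
constexpr double new_tree   = 20.0;
constexpr double baseline_a = 5.0 * new_tree;    // 100 ns per insert
constexpr double baseline_b = 2.0 * new_tree;    //  40 ns per insert
constexpr double a_vs_b     = baseline_a / baseline_b;
static_assert(a_vs_b == 2.5, "baselines differ despite both being 2-5x slower");
```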

ognarb|1 month ago

> History/Motivations This project started as an exploration of using AI agents for software development. Based on experience tuning systems using Abseil's B+tree, I was curious if performance could be improved through SIMD instructions, a customized allocator, and tunable node sizes. Claude proved surprisingly adept at helping implement this quickly, and the resulting B+tree showed compelling performance improvements, so I'm making it available here.

It seems the code was written with AI; I hope the author knows what he is doing. Last time I tried to use AI to optimize CPU-heavy C++ code (StackBlur) with SIMD, it failed :/

klaussilveira|1 month ago

Both Codex and Claude Code are terrible with C++. Not sure why that is, but they just spit out nonsense that creates more work than it saves me.

Have you tried to do any OpenGL or Vulkan work with it? Very frustrating.

React and HTML, though, pretty awesome.

leopoldj|1 month ago

I apologize if this is common knowledge. Modern C++ coding agents need a deep semantic understanding of the external libraries and header files; a simple RAG over the code base is not enough. For example, GitHub Copilot for VS Code and Visual Studio uses IDE language services like IntelliSense. To that end, using a proper C++ IDE rather than a plain editor will improve the quality of suggested code. For example, if you're using VS Code, make sure the C/C++ Extension Pack is installed.

shihab|1 month ago

I'd love to see a breakdown of what exactly worked here, or better yet, a PR to upstream Abseil that implements those ideas.

AI is always good at going from 0 to 80%; it's the last 20% it struggles with. It'd be interesting to see Claude-written code make its way into a well-established library.

dicroce|1 month ago

OK, maybe someone here can clear this up for me. My understanding of B+trees is that they are good for implementing indexes on disk because the fanout reduces disk seeks. What I don't understand is in-memory B+trees, which most of the implementations I find are. What are the advantages of an in-memory B+tree?

wffurr|1 month ago

https://github.com/abseil/abseil-cpp/blob/master/absl/contai... mentions that b-tree maps hold multiple values per node, which makes them more cache-friendly than the red-black trees used in std::map.

You use either container when you want a sorted associative map type, which I have not found many use cases for in my work. I might have a handful of them versus many instances of vectors and unsorted associative maps, i.e. absl::flat_hash_map.

dataflow|1 month ago

Memory also has a seek penalty. It's called a cache miss penalty. It might be easier to think of them in general as penalties for nonlocality.
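Putting the two answers above together, a sketch of why packing keys per node reduces those nonlocality penalties — hypothetical layouts for illustration, not any real implementation:

```cpp
#include <cstddef>
#include <cstdint>

// A red-black tree pays one potential cache miss per comparison, because
// each key lives in its own heap-allocated node; a B-tree amortizes each
// miss over many keys stored contiguously.

struct RBNode {              // shape of a typical std::map node
    RBNode* left;
    RBNode* right;
    RBNode* parent;
    bool    red;
    int64_t key;             // exactly one key per ~40-byte allocation
};

constexpr std::size_t kCacheLine = 64;

template <int N>
struct BTreeLeaf {           // N sorted keys in one contiguous block
    int16_t count;           // number of keys currently in use
    int64_t keys[N];
};

// BTreeLeaf<15> packs 15 keys into 128 bytes = two cache lines, so a
// search within the node touches at most two lines instead of up to
// fifteen scattered allocations.
using Leaf15 = BTreeLeaf<15>;
```

Tunable node sizes, as the readme mentions, amount to picking N so a node spans a chosen number of cache lines for the target workload.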