harrid | 2 years ago
Also note that most <algorithm>s have built-in, nearly drop-in parallelism via the <execution> policies, a massively underused feature. In typical C++ fashion, though, their newer views:: counterparts lack those overloads for the moment.
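For a sense of how drop-in those overloads are, here is a minimal sketch (function names are mine for illustration; actually running in parallel assumes a toolchain with parallel-algorithm support, e.g. MSVC, or GCC/Clang linked against TBB):

```cpp
// Prepending an execution policy is all the opt-in that's needed.
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

long long parallel_sum(const std::vector<int>& v) {
    // std::reduce may split the work across threads under par.
    return std::reduce(std::execution::par, v.begin(), v.end(), 0LL);
}

std::vector<int> parallel_sorted(std::vector<int> v) {
    // Same call as the serial overload, plus the policy argument.
    std::sort(std::execution::par, v.begin(), v.end());
    return v;
}
```

The policy is a hint, not a guarantee: implementations are free to fall back to serial execution.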
jcelerier|2 years ago
(Not std::map; at the time it must have been something like tsl::hopscotch_map.)
Note also that nowadays Boost, for instance, comes with state-of-the-art flat_map and unordered_flat_map containers, which give you both cache coherency at small sizes and the algorithmic characteristics of the various kinds of maps.
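The core idea behind a flat map can be sketched in a few lines of stdlib C++ (this is my illustrative toy, not Boost's implementation): keys live sorted in one contiguous vector and lookup is a binary search, which is what buys the cache friendliness.

```cpp
// Toy flat map: sorted contiguous storage + binary search.
// O(log n) find, O(n) insert — the flat_map trade-off.
#include <algorithm>
#include <utility>
#include <vector>

struct FlatMap {
    std::vector<std::pair<int, int>> data;  // kept sorted by key

    void insert(int key, int value) {
        auto it = std::lower_bound(data.begin(), data.end(), key,
            [](const auto& p, int k) { return p.first < k; });
        if (it != data.end() && it->first == key) it->second = value;
        else data.insert(it, {key, value});  // shifts the tail: O(n)
    }

    const int* find(int key) const {
        auto it = std::lower_bound(data.begin(), data.end(), key,
            [](const auto& p, int k) { return p.first < k; });
        return (it != data.end() && it->first == key) ? &it->second : nullptr;
    }
};
```

Insertion is linear, so flat maps shine when the map is built once (or rarely mutated) and queried a lot.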
CyberDildonics|2 years ago
This is a very poor way to choose a data structure. How many items you want to store is not what you should be thinking about.
How you are going to access the data is what matters. Looping through it: vector. Random access by key: hash map. These two data structures cover what people need 90% of the time.
If you are putting data on the heap, it is usually because you don't already know how many items you want to store.
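The access-pattern rule above can be made concrete (names and data here are mine, purely illustrative):

```cpp
// Sequential scan -> std::vector; keyed random access -> std::unordered_map.
#include <string>
#include <unordered_map>
#include <vector>

int total(const std::vector<int>& samples) {
    int sum = 0;
    for (int s : samples) sum += s;  // linear scan: contiguous, prefetch-friendly
    return sum;
}

int price_of(const std::unordered_map<std::string, int>& prices,
             const std::string& item) {
    auto it = prices.find(item);     // O(1) average lookup by key
    return it == prices.end() ? -1 : it->second;
}
```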
harrid|2 years ago
(But I'm about to move it to the GPU.)
baq|2 years ago
…but in order for this to really matter, communication is required, since even the best developers don’t scale.
elromulous|2 years ago
https://youtu.be/YQs6IC-vgmo
CyberDildonics|2 years ago
This is a scenario that should never happen, because if you are going to retrieve arbitrary elements, they should be in a hash map or a sorted map.
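The hash map vs. sorted map split comes down to whether you need ordered queries. A small sketch (the helper name is mine): std::unordered_map covers exact lookups; std::map additionally supports range and nearest-key queries via its ordering.

```cpp
// std::map keeps keys sorted, so it can answer "smallest key >= k",
// which a hash map fundamentally cannot do without a full scan.
#include <map>

// Returns the smallest key >= k, or -1 if there is none.
int next_key_at_or_above(const std::map<int, int>& m, int k) {
    auto it = m.lower_bound(k);
    return it == m.end() ? -1 : it->first;
}
```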