Benchmarks
boost::unordered_[multi]set
All benchmarks were created using unordered_set<unsigned int> (non-duplicate) and unordered_multiset<unsigned int> (duplicate). The source code can be found here.
The insertion benchmarks insert n random values, where n is between 10,000 and 3 million. For the duplicated benchmarks, the same random values are repeated an average of 5 times.

The erasure benchmarks erase all n elements randomly until the container is empty. Erasure by key uses erase(const key_type&) to remove entire groups of equivalent elements in each operation.

The successful lookup benchmarks look up all n values in their original insertion order.

The unsuccessful lookup benchmarks use n randomly generated integers produced with a different seed value.
GCC 12 + libstdc++-v3, x64
Insertion

[plots: non-duplicate elements | duplicate elements | duplicate elements, …]
[plots: non-duplicate elements, … | duplicate elements, … | duplicate elements, …]
Clang 15 + libc++, x64
Insertion

[plots: non-duplicate elements | duplicate elements | duplicate elements, …]
[plots: non-duplicate elements, … | duplicate elements, … | duplicate elements, …]
Visual Studio 2022 + Dinkumware, x64
Insertion

[plots: non-duplicate elements | duplicate elements | duplicate elements, …]
[plots: non-duplicate elements, … | duplicate elements, … | duplicate elements, …]
boost::unordered_(flat|node)_map
All benchmarks were created using:

- absl::flat_hash_map<uint64_t, uint64_t>
- boost::unordered_map<uint64_t, uint64_t>
- boost::unordered_flat_map<uint64_t, uint64_t>
- boost::unordered_node_map<uint64_t, uint64_t>

The source code can be found here.
The insertion benchmarks insert n random values, where n is between 10,000 and 10 million.

The erasure benchmarks traverse the n elements and erase those with an odd key (50% on average).

The successful lookup benchmarks look up all n values in their original insertion order.

The unsuccessful lookup benchmarks use n randomly generated integers produced with a different seed value.
boost::concurrent_(flat|node)_map
All benchmarks were created using:

- oneapi::tbb::concurrent_hash_map<int, int>
- gtl::parallel_flat_hash_map<int, int> with 64 submaps
- boost::concurrent_flat_map<int, int>
- boost::concurrent_node_map<int, int>

The source code can be found here.
The benchmarks exercise a number of threads T (between 1 and 16) concurrently performing operations randomly chosen among update, successful lookup and unsuccessful lookup. The keys used in the operations follow a Zipf distribution with different skew parameters: the higher the skew, the more concentrated the keys are in the lower values of the covered range.
boost::concurrent_flat_map and boost::concurrent_node_map are exercised using both regular and bulk visitation: in the latter case, lookup keys are buffered in a local array and then processed at once each time the buffer reaches bulk_visit_size.
GCC 12, x64
[plots: 500k updates, 4.5M lookups (three panels)]
[plots: 5M updates, 45M lookups (three panels)]
Clang 15, x64
[plots: 500k updates, 4.5M lookups (three panels)]
[plots: 5M updates, 45M lookups (three panels)]
Visual Studio 2022, x64
[plots: 500k updates, 4.5M lookups (three panels)]
[plots: 5M updates, 45M lookups (three panels)]
Clang 12, ARM64
[plots: 500k updates, 4.5M lookups (three panels)]
[plots: 5M updates, 45M lookups (three panels)]
GCC 12, x86
[plots: 500k updates, 4.5M lookups (three panels)]
[plots: 5M updates, 45M lookups (three panels)]