Fast destroy (=clear) std::unordered_map (or concurrent_unordered_map)

Dear all,

I encountered high destruction (= clear) overhead with std::unordered_map (and concurrent_unordered_map).

I use std::unordered_map<int, std::unordered_map<int, ElementType>>, so far more unordered_maps are allocated than usual.

I suspect this is due to each element of std::unordered_map being allocated separately.

Does anyone know how to speed up such a case?
(A fast allocator? A memory pool? Flyweight? Something else?)
All node-based data structures have a huge overhead for allocations and deallocations.
std::unordered_map is a hash table, but typical implementations are still node-based: each element lives in a separately allocated node.

One option to speed things up is polymorphic memory resources.
https://www.bfilipek.com/2020/08/pmr-dbg.html

This talk might be interesting: https://www.youtube.com/watch?v=q6A7cKFXjY0
Though the example uses a vector, the principle should also work for other containers.
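
For illustration, here is a minimal sketch of the idea (assumes C++17 <memory_resource>; the buffer size, loop bounds, and int standing in for ElementType are placeholders): a nested std::pmr::unordered_map whose nodes all come from one monotonic buffer, so individual deallocations are no-ops.

#include <memory_resource>
#include <unordered_map>

int main() {
    // One upfront buffer; "freeing" from a monotonic resource is a no-op,
    // so tearing the maps down avoids per-node delete calls.
    std::pmr::monotonic_buffer_resource pool(1 << 20); // 1 MiB to start; grows if exhausted

    {
        // The polymorphic_allocator is propagated to the inner maps via
        // uses-allocator construction, so every node comes from `pool`.
        std::pmr::unordered_map<int, std::pmr::unordered_map<int, int>> m(&pool);
        for (int i = 0; i < 100; ++i)
            for (int j = 0; j < 100; ++j)
                m[i][j] = i + j;
    } // map destructors run here, but the deallocations cost nothing

}   // the whole buffer is released in one shot when `pool` is destroyed

Note that a monotonic resource only reclaims its memory when the resource itself is destroyed, so this fits build-use-throw-away workloads like the one described.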

Are you sure that std::unordered_map<int, std::unordered_map<int, ElementType>> is the best choice? If you tell us the problem you want to solve, someone might be able to recommend a better approach.

Have you considered using pair<int,int> as the key?

#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>
#include <functional> // std::hash
using namespace std;

// Hash functor so pair<int,int> can be used directly as the key.
struct pair_hash {
    template<typename T1, typename T2>
    size_t operator()(const pair<T1, T2>& p) const {
        // Hash each member, then mix the two hashes with the golden-ratio
        // constant (a variant of the boost::hash_combine scheme).
        auto h1 = hash<T1>{}(p.first), h2 = hash<T2>{}(p.second);
        return h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2);
    }
};

int main() {
    unordered_map<pair<int, int>, string, pair_hash> m;
    m[make_pair(4, 3)] = "hi";
    cout << m[make_pair(4, 3)] << '\n';
}

Or if the range of your ints is small enough, you could create a single int out of them. E.g., if the ints are called x and y and the range of each is 0 to 999, you could use the key x * 1000 + y.
Or if the ints are 32 bits you could pack them both into a single 64-bit integer, as sketched below.
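
A sketch of that packing idea, assuming both values fit in 32 bits (casting through uint32_t keeps negative values well-defined):

#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_map>

// Pack two 32-bit ints into one 64-bit key: x in the high half, y in the low half.
std::uint64_t pack(std::int32_t x, std::int32_t y) {
    return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(x)) << 32)
         | static_cast<std::uint32_t>(y);
}

int main() {
    std::unordered_map<std::uint64_t, std::string> m;
    m[pack(4, 3)] = "hi";
    std::cout << m[pack(4, 3)] << '\n';
}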
Unrelated, but destroying things, and creating them, is often expensive if you do it over and over, e.g. in a loop, or in a loop that calls a function that does these things.

Finding a way to avoid creating and destroying so much can give large rewards.
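
For example, a sketch of hoisting the container out of the loop (assumes each pass can simply clear and refill the same map; the function and bounds are made up for illustration):

#include <unordered_map>

// Hypothetical per-pass work: refills the same map instead of building a new one.
void do_pass(std::unordered_map<int, int>& scratch, int pass) {
    scratch.clear(); // frees the nodes, but the container and its bucket array survive
    for (int i = 0; i < 100; ++i)
        scratch[i] = i * pass;
    // ... use scratch ...
}

int main() {
    std::unordered_map<int, int> scratch; // constructed once, reused every pass
    for (int pass = 0; pass < 1000; ++pass)
        do_pass(scratch, pass);
}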
Thank you, everyone.

All of the advice is useful to me.
Polymorphic memory resources are particularly interesting; they are new to me, and I did not know they were part of the standard.
(However, my compiler does not seem to recognize them, even though it partially supports C++17.)

I now understand that a memory pool reserved up front works well.
Thank you so much for your cooperation.

I found that a library for polymorphic memory resources is available in Boost; I will try it.
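
A rough sketch of what that might look like with Boost, assuming Boost.Container's pmr implementation (boost::container::pmr) paired with boost::unordered_map; treat the exact combination as unverified against any particular Boost version:

#include <boost/container/pmr/monotonic_buffer_resource.hpp>
#include <boost/container/pmr/polymorphic_allocator.hpp>
#include <boost/unordered_map.hpp>
#include <functional>
#include <utility>

int main() {
    // Same idea as std::pmr, but usable on pre-C++17 toolchains.
    boost::container::pmr::monotonic_buffer_resource pool(1 << 20);

    using Alloc = boost::container::pmr::polymorphic_allocator<std::pair<const int, int>>;
    Alloc alloc(&pool);
    boost::unordered_map<int, int, boost::hash<int>, std::equal_to<int>, Alloc> m(alloc);

    for (int i = 0; i < 1000; ++i)
        m[i] = i * i;
}   // the pool releases everything at once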

Kind regards