std::vector increasing peak memory

I am failing to understand the memory taken up by a vector. Problem skeleton:

Consider a vector which is a collection of lists, where each list is a collection of pointers. Exactly like:

std::vector<std::list<ABC*> > vec;
where ABC is my class. We work on 64-bit machines, so the size of a pointer is 8 bytes.

At the start of my flow in the project, I resize this vector to a number so that I can store lists at the respective indexes.

vec.resize(613284686);
At this point, the capacity and size of the vector would be 613284686, right? After resizing, I insert the lists at the corresponding indexes as:

// Somewhere down in the program, make these lists. Simple push for now.
std::list<ABC*> l1;
l1.push_back(<pointer_to_class_ABC>);
l1.push_back(<pointer_to_class_ABC>);

// Copy the list at the location
setInfo(613284686, l1);

void setInfo(uint64_t index, std::list<ABC*> list) {
    std::copy(list.begin(), list.end(), std::back_inserter(vec.at(index)));
}
Alright. So inserting is done. Notable things are:

Size of the vector: 613284686
Entries in the vector: 3638243731 // calculated by going over the vector indexes and adding up the size of the std::list at each index

Now, since there are 3638243731 pointer entries, I would expect the memory taken by this vector to be about 30 GB: 3638243731 * 8 bytes = ~30 GB.

BUT when I have this data in memory, memory peaks at 400 GB.

And then I clear up this vector with:

std::vector<std::list<nl_net> >& ccInfo = getVec(); // getVec is defined elsewhere and returns the original vec
std::vector<std::list<nl_net> >::iterator it = ccInfo.begin();
for(; it != ccInfo.end(); ++it) {
    (*it).clear();
}

ccInfo.clear(); // since it is a reference
std::vector<std::list<nl_net> >().swap(ccInfo); // this makes the capacity of the vector 0
Well, after clearing up this vector, memory drops down to 100 GB. That is a lot for one vector to be holding.

Could you all correct me on what I am failing to understand here?

P.S. I cannot reproduce it on smaller cases; it only shows up in my project.
Just storing a vector of 613284686 empty lists requires vec.size() * sizeof(std::list<ABC*>) bytes of memory. If the size of each list object is 24 bytes, this means it requires 14 718 832 464 bytes (about 14 GB).

Each list node needs at least two pointers (next and prev) plus the element itself (an ABC*), which is another pointer in this case. Three pointers on a 64-bit machine take 24 bytes, so the memory needed for all the nodes becomes 3638243731 * 24 bytes = 87 317 849 544 bytes (87 GB).

14 GB + 87 GB = 101 GB

Since each list node is dynamically allocated it will also have to store some extra information (e.g. the size of each allocation) which means the real memory usage is expected to be slightly higher.

This does not take into account the size for each ABC object.
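For what it's worth, here is a minimal sketch of the arithmetic above. It assumes a typical 64-bit implementation where sizeof(std::list<ABC*>) is 24 bytes and each node carries two link pointers plus the element; both of those are implementation details, not guarantees:

#include <cstdint>
#include <iostream>
#include <list>

struct ABC {}; // stand-in for the real class

int main() {
    const std::uint64_t num_lists    = 613284686;  // vec.size()
    const std::uint64_t num_elements = 3638243731; // total pointers across all lists

    // The vector stores one std::list object per index, even when the list is empty.
    const std::uint64_t vector_bytes = num_lists * sizeof(std::list<ABC*>);

    // Each list node is a separate heap allocation holding at least
    // next + prev pointers and the ABC* element, plus allocator bookkeeping.
    const std::uint64_t node_bytes = num_elements * (3 * sizeof(void*));

    std::cout << "vector of empty lists: " << vector_bytes / 1e9 << " GB\n"
              << "list nodes (minimum):  " << node_bytes / 1e9 << " GB\n"
              << "total (minimum):       " << (vector_bytes + node_bytes) / 1e9 << " GB\n";
}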

Edit: fixed some mistakes with the calculations
@bhargavah

I'd like to add to @Peter87's point by saying that a container of raw pointers (like your std::list<ABC*>) does not call delete on the pointers it holds, so you also have a memory leak.

Storage of this kind involves a separate allocation behind every entry, and thus requires you to loop through the vector to clean out the lists.

Since, as you said, these are std::list<ABC*>, each pointer in each list must be visited to call delete on that ABC* before the lists themselves are destroyed, or the ABC objects they hold will leak.
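Something along these lines would be needed before the containers are torn down. This is only a sketch, and it assumes every ABC* was allocated with new and is owned solely by this structure (if anything else owns or shares those pointers, deleting them here would be wrong):

// Delete every ABC before dropping the containers (assumes sole ownership via new).
for (std::list<ABC*>& lst : vec) {
    for (ABC* p : lst) {
        delete p;
    }
    lst.clear();
}
std::vector<std::list<ABC*> >().swap(vec); // also releases the vector's own storage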

Are you using a computer with 400 GB of RAM? Are you relying on virtual memory to satisfy the demand above the physical RAM installed? This is known to be a very, very poor strategy.

Can you show the storage strategy a little more clearly, but from the perspective of the ABC objects?

For example, how many ABC objects are you actually creating, and are they referenced by an index number to get the list?

From our perspective: you say the peak memory consumption was 400 GB, and we know at least 100 GB of that is consumed just by the containment strategy (the vector plus the list nodes), so there is reason to think you have roughly 250 GB or more in ABC data.

There are likely far better strategies for doing this, but without some more information (and I sense it isn't much), we can only guess.

A large number of solutions come to mind (I could write two chapters or more just from what comes to mind so far).

To fit this correctly we need to understand the nature of the target machine you're using for this. If you do have a machine with several hundred GB of RAM, that's fine (some of us have seen those), but it isn't all that likely. 64 GB or 128 GB is more likely, and for a great many situations like this there is no choice but to seriously consider well-structured methods for using disk storage to augment RAM.

The key is to understand how the data is to be processed. What is done to process/traverse this data? What are the performance requirements for doing this?
C++ has a couple of containers that do hash-table type work (std::unordered_map and std::unordered_set). It looks like you are trying to do this the DIY array-index way, which is great for smaller problems, but you can see that it fails here. That is one way to do it.
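For example, a sparse mapping from index to its entries, which only pays for the indexes actually used, might look roughly like this. The container choice and the use of std::vector<ABC*> instead of std::list<ABC*> are my assumptions, not something from your project:

#include <cstdint>
#include <unordered_map>
#include <vector>

struct ABC {};

// Only indexes that actually receive entries consume memory, unlike a
// vector resized to 613284686 slots up front.
std::unordered_map<std::uint64_t, std::vector<ABC*> > info;

void setInfo(std::uint64_t index, const std::vector<ABC*>& ptrs) {
    std::vector<ABC*>& slot = info[index]; // creates the slot on first use
    slot.insert(slot.end(), ptrs.begin(), ptrs.end());
}

Using std::vector<ABC*> as the mapped type also avoids the per-node allocations Peter87 described; keep std::list only if you really need its splicing or stable-iterator properties.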

Another way to do it is a bucketed-tables approach, where you have a table of hash tables. So instead of 613284686 slots, you might divide the index by a power of 10 and put it in one of roughly 61328 buckets, and then inside each bucket have a more useful container that you can do a binary search on, and still get great response time at a fraction of the memory cost.
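A rough illustration of that bucketing idea (the bucket width of 10000, the bucket count of 61329, and the ordered map inside each bucket are just assumptions to make the sketch concrete):

#include <cstdint>
#include <map>
#include <vector>

struct ABC {};

// 613284686 possible indexes split into buckets of 10000 each (61329 buckets).
// Each bucket only holds the indexes actually used, and lookups within a
// bucket are O(log n) via the ordered map.
std::vector<std::map<std::uint64_t, std::vector<ABC*> > > buckets(61329);

void add(std::uint64_t index, ABC* p) {
    buckets[index / 10000][index].push_back(p);
}

const std::vector<ABC*>* find(std::uint64_t index) {
    const auto& bucket = buckets[index / 10000];
    auto it = bucket.find(index);
    return it == bucket.end() ? nullptr : &it->second;
}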

This is called... the time/space tradeoff. If you have infinite space, you can do a lot of big things in O(1) or O(n) that would take much longer if you have 1 MB of memory.

We have servers where I work with the kind of RAM you need, but if you sucked that much out of one, the admin would come looking for your head with an axe. This is not the right approach unless you have a dedicated big box for a very special, huge real-time problem with a fat budget.
"you can do a lot of big things in O(1) or O(n) that would take much longer if you have 1 MB of memory."

Some cases turn seconds into days (or weeks).
By the way, index 613284686 is out of bounds for a vector of size 613284686 (valid indexes run from 0 to 613284685), so vec.at(index) will throw std::out_of_range here:
vec.resize(613284686);
setInfo(613284686, l1);

void setInfo(uint64_t index, std::list<ABC*> list) {
   std::copy(
      list.begin(),
      list.end(),
      std::back_inserter(vec.at(index)) //out of bounds
   );
}
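A hypothetical corrected call, assuming the intent was to store at the last valid slot rather than one past the end:

setInfo(613284685, l1); // last valid index for a vector of size 613284686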
Thanks to all for replying. I understand the limitation now and the next steps to take. Cheers!
Topic archived. No new replies allowed.