C++ vector reaching its maximum size

Hi,

My program reads the size of the vector from the user and then creates a vector of vectors (let's say SIZE1). In addition, the user enters the number of vectors of vectors he needs (let's say SIZE2), as follows:

class Vectors
{
    // member functions go here
private:
    vector< vector<int> > vectors;
    vector<int>::iterator it;
};

int main()
{
    ...
    A = new Vectors*[SIZE2];
    for (int i = 0; i < SIZE2; i++)
        A[i] = new Vectors[SIZE1];
    ...

With a few calculations and insertions to my vector (vector of vectors)... the program works fine and gives me the results...

However, with huge calculations and insertions the program stops working and gives me this message:

"Unhandled exception at 0x770DC41F in Test.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x001CEADC"

Thus, it seems that the vector reached its maximum size... I tried to use reserve(), but that did not work.

I read that "By default, when you run a 64-bit managed application on a 64-bit Windows operating system, you can create an object of no more than 2 gigabytes (GB). However, in the .NET Framework 4.5, you can increase this limit"

What do you think would be the best option for me to do? (Note that my program is very long and complex; I'm currently using Microsoft Visual Studio 2012, Win32 application.)

1. convert my program to the .NET Framework (C++)

2. convert my program to C#, in case C# can help

3. change some settings on my computer (if it helps: my workstation has a 3.6 GHz Xeon processor with 32 GB of RAM)

4. convert to another version of C++ that does not have any restriction on the size of the array (if available)

Please note that I have never worked with either the .NET Framework or C#.

Best regards,

Nouf
You have an array of arrays of your class which contains vectors of vectors. I think you are accidentally mixing two different things here.
My program is very complex...
Anyhow, it works fine with a small dataset... but it does not work for a large dataset...
You are running out of memory for your data; C# won't be able to help.

If the data set is sparse (for instance, the bulk of the values are zeroes), the amount of memory needed can be reduced by using some sparsity technique.

For instance:

#include <unordered_map>
#include <iostream>

template < typename T >
using vector = std::unordered_map< std::size_t, T > ;

int main ()
{
    vector< vector< vector<int> > > vecs ;

    vecs[9999][19999][229999] = 9 ;
    vecs[5555][4444][3333] = 543 ;
    vecs[100][88888888][25] = 1234 ;
    // etc

    int sum = 0 ;
    for( const auto& p1 : vecs )
        for( const auto& p2 : p1.second )
            for( const auto& p3 : p2.second ) sum += p3.second ;
    std::cout << sum << '\n' ;
}

http://coliru.stacked-crooked.com/a/c982751e4755a6ad

In practice, you might want to use a library, for instance:
Boost uBLAS: http://www.boost.org/doc/libs/1_55_0/libs/numeric/ublas/doc/index.htm
SparseLib++: http://math.nist.gov/sparselib++/