Increase stack size. Why is it bad?



I want a function that creates a char array that can hold 1 GB of data, because I like the idea that allocation and freeing are handled automatically.

If it's NOT possible to increase the stack size that much, what's the reason? And if it IS possible, why would it be a bad idea?


Best regards
Volang
Volang wrote:
I like the idea that allocation and freeing are handled automatically

std::unique_ptr<char[]> buffer(new char[1 << 30]); // 1 GB on the heap, freed automatically when buffer goes out of scope
The stack limit is determined by some system-level things. The OS probably has a maximum amount, the amount of RAM on your machine matters (you can't ask for a GB on a 128 MB system), and so do the compiler flags (the compiler can set the stack size within the other limits).

The stack size is limited because you want to be able to run many programs at once: if all 1000 of the programs running on your computer had each asked for 1 GB up front (stack is UP FRONT, it's allocated at program launch) you would need quite the machine! By limiting it, many programs can run at once without one or two of them stealing all the resources.

It's not a 'bad idea' so much as 'you can't do it past some point'. C++, unlike the vast majority of languages, does not have a lot of places where it says "you can't do that". This is one of those few places, and the reason, mostly, is that the limits are imposed from on high and the language can't do much about them.
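
To see the limit in action, here is a minimal sketch (the exact failure mode varies by OS and toolchain):

int main()
{
    // 1 GB of automatic storage: default stacks are usually only
    // 1-8 MB, so this crashes at runtime with a stack overflow
    // (or draws a compiler/linker complaint, depending on the toolchain)
    char buffer[1 << 30];
    buffer[0] = 'x'; // touch it so the array isn't optimized away
    return buffer[0] != 'x';
}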

That said, I routinely read 4-5 GB files into the heap. Memory allocation isn't that big a deal anymore. And even 'back in the day' it really wasn't as bad as people make it out to be; I had a 'dumb pointer' template class that simply called delete when it went out of scope, and used it on everything for years back then. It didn't do anything else, just constructor and destructor, which is pretty much what the modern smart pointers do, I think. (Actually, it did one other thing: it prevented the user from leaking by reallocating, because you could only allocate at construction and could not re-assign the pointer to another block, etc.)
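
Something along these lines, a rough reconstruction of the idea (the names are hypothetical, and the deleted copy operations are a C++11 modernization of what was a constructor-and-destructor-only class):

#include <cstddef>

template <typename T>
class dumb_ptr {
    T* p;
public:
    // allocation happens only at construction...
    explicit dumb_ptr(std::size_t n) : p(new T[n]) {}
    // ...and delete happens when it goes out of scope
    ~dumb_ptr() { delete[] p; }
    // no copying or re-assignment, so the block can't be leaked
    dumb_ptr(const dumb_ptr&) = delete;
    dumb_ptr& operator=(const dumb_ptr&) = delete;
    T& operator[](std::size_t i) { return p[i]; }
};

int main()
{
    dumb_ptr<char> buffer(1 << 30); // 1 GB on the heap, freed automatically
    buffer[0] = 'x';
}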
Use std::string instead of a C string (char array).

The string object itself is stored on the stack but it points to memory that is on the heap.

https://stackoverflow.com/questions/42049778/where-is-a-stdstring-allocated-in-memory

There are practical limits to the amount of heap memory a program can access, and that changes depending on whether the program is compiled as 32-bit or 64-bit.

#include <iostream>
#include <string>

int main()
{
   std::string str;

   std::cout << str.size() << ", " << str.max_size() << '\n';
}


Visual Studio 2019, compiled as 32-bit:
0, 2147483647

That's about 2 GB, theoretically available.

Visual Studio 2019, compiled as 64-bit:
0, 9223372036854775807

That is about 9 EB (exabytes) of memory a string could theoretically hold.

More than you will need for a 1GB file.

If you want to have easy access to each line in the file, use std::vector<std::string> (there's a sketch of reading a file that way at the end of this post).

#include <iostream>
#include <string>
#include <vector>

int main()
{
   std::vector<std::string> vec;

   std::cout << vec.max_size() << '\n';
}

32-bit:
178956970

64-bit:
576460752303423487

Having a vector of C++ strings will really chew up available memory fast, but reading a 1 GB file into memory this way isn't likely to blow the stack or the heap.
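
For instance, a minimal sketch of reading a file line by line into a vector ("input.txt" is a placeholder name):

#include <fstream>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> lines;
    std::ifstream in("input.txt"); // placeholder file name

    // each line lands in its own heap-allocated string; only the
    // small vector and string handles themselves live on the stack
    for (std::string line; std::getline(in, line); )
        lines.push_back(std::move(line));
}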
It may not be a C string. It could be a char array meaning 'byte array' (a binary file, for example). I don't use C++ strings for byte arrays (more out of habit than anything else). I don't want to forget and do text things to them.

Memory fragmentation may prevent making one much more than about 3/4 the size of your RAM, even when allocated dynamically. The stack is a single contiguous block, which is another reason the OS does not like making it too large.
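
In that spirit, a minimal sketch of slurping a binary file into a byte buffer ("data.bin" is a placeholder name):

#include <cstddef>
#include <fstream>
#include <vector>

int main()
{
    std::ifstream in("data.bin", std::ios::binary); // placeholder file name

    // find the file size, then read everything into one heap block
    in.seekg(0, std::ios::end);
    std::vector<char> bytes(static_cast<std::size_t>(in.tellg()));
    in.seekg(0);
    in.read(bytes.data(), static_cast<std::streamsize>(bytes.size()));

    // bytes[index] now reaches any byte offset in the data
}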
jonnin wrote:
it may not be a C string

Calling it a char array is a rather odd way, IMO, to say byte array, i.e. binary file format.

Dealing with bytes vs. printable characters does require differing approaches for memory storage.

Without knowing the actual format, and how the data is to be manipulated, speculating on things is less than 100% productive.
@helios, is there a major problem with std::array<char, 1 << 30> buffer; vs. using a std::unique_ptr?

Just as efficient as a regular char array, with some added functionality that makes using it similar to the other C++ containers.

Of course std::array's size is fixed at compile time, and it lives on the stack, so there's that.

Interesting way of addressing 1GB, 1 << 30, btw. :)
Calling it a char array is a rather odd way, IMO, to say byte array, i.e. binary file format.

I used to do that almost exclusively. (well, char* allocated to file size, actually).
It's what read(char*, size, ...) and write(char*, size, ...) expect, and [index] gets you to whatever byte offset in the data you need to extract fields.

What do you use? (Could be one of the aliases for char, BYTE or something?)
Terminology, not usage, jonnin.

What is used is not the same as calling it something that can be misconstrued without context.
@helios, is there a major problem with std::array<char, 1 << 30> buffer; vs. using a std::unique_ptr?
It would be on the stack. Might as well just use a raw array.
On Linux you can easily set the stack size to 'unlimited'. I think the biggest "danger" in doing so is that if you happen to accidentally write an infinite recursion (which is quite easy to do) your program will use more and more memory and probably get very slow because it starts swapping memory to disk (I hate when that happens).
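
That is ulimit -s unlimited at the shell. A process can also raise its own soft limit up to the hard limit; a minimal sketch, assuming a POSIX system:

#include <sys/resource.h>
#include <cstdio>

int main()
{
    rlimit rl;
    getrlimit(RLIMIT_STACK, &rl);
    std::printf("stack soft limit: %llu, hard limit: %llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);

    // raise the soft limit as far as the hard limit allows
    // (an 'unlimited' limit shows up as RLIM_INFINITY, a huge value)
    rl.rlim_cur = rl.rlim_max;
    setrlimit(RLIMIT_STACK, &rl);
}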
Running out of heap can usually be handled gracefully through try/catch.

Running out of stack is an insta-death "do not pass go, do not collect $200" deal.
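
A minimal sketch of the heap case (the request here is deliberately absurd so that operator new throws):

#include <iostream>
#include <memory>
#include <new>

int main()
{
    try {
        // absurdly large request, guaranteed to fail; note that on
        // systems that overcommit, merely 'large' requests can
        // appear to succeed until the memory is actually touched
        std::unique_ptr<char[]> buffer(new char[1ull << 62]);
    } catch (const std::bad_alloc& e) {
        std::cerr << "allocation failed: " << e.what() << '\n';
    }
}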
salem c wrote:
Running out of heap can usually be handled gracefully through try/catch.

I've read/seen some opinions from "experts" that using exception handling is a design flaw in one's code.

Most of them are "If it's good enough for C then it's good enough for me" types who write C++ code that looks, at best, like C++98.

For me, the people who do the language standardization believed exception handling has a valid place, and I happen to agree.

I am by no means even remotely a C++ expert, having very badly self-taught myself, and continuing to do so with all the new-fangled changes C++11 and later standards have made.
helios wrote:
It would be on the stack. Might as well just use a raw array.

As I was typing my question I did a quickie code snippet with VS 2019 and std::array. No errors when compiled, but it sure did whinge that 1 GB of stack memory was more than VS was willing to allow the program, 32-bit or 64-bit. It suggested using heap memory.
Yes, that's because std::array is simply
template <typename T, std::size_t N>
class array {
    T m_data[N]; // the N elements are stored inside the object itself
    // etc.
};
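
A quick way to confirm the elements live inside the object itself:

#include <array>
#include <iostream>

int main()
{
    // no heap allocation anywhere: the 1024 bytes are part of the
    // object, so a local std::array lands entirely on the stack
    std::cout << sizeof(std::array<char, 1024>) << '\n'; // 1024 on common implementations
}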
I've read/seen some opinions from "experts" that using exception handling is a design flaw in one's code.


Sure, if they can write magic code that can magically handle all possible situations and they don't want to have any kind of protection or handling against anything ever throwing :)

With the honorable exception of giving clear guidelines to very inexperienced beginners who don't want nuance but just want to know how to do it right for 99% of cases (for example simply telling someone with five minutes' experience to stop using char arrays and just use a string, or that they should basically never use new), I take a very dubious view of any expert announcing that usage of <language_feature_X> is always bad. Every feature is just a tool in the toolbox; right tool in the right place at the right time.