Limiting Memory to increase efficiency

The way I understand it, declaring variables inside a function causes the program to create an instance of that variable every time the function is run. Is there a way to avoid this? I'm trying to decrease the amount of memory my program uses.

Also, I've tried global declarations. The problem is that if I declare the same variables in more than one file, they conflict with each other, but if I declare them in a single file, they aren't visible to the rest. This is not ideal.
Declaring variables in the scope where they are used is efficient, since the memory is released at the end of that scope. Declaring them globally isn't, because the memory isn't released until the application exits.

Post what you feel is running inefficiently and maybe we can give suggestions.
Ah, OK. I was under the impression that variables declared within a function are 'created', and that the memory for them is not deallocated after the function completes. That impression came from a combination of my understanding of variables within a function and the memory fluctuations I observed while running my programs.

By 'released', I take it you mean the memory allocated for a function's variables is completely deallocated when the function returns.

Thank you for the clarification.
Ah, OK. I was under the impression that variables declared within a function are 'created', and that the memory for them is not deallocated after the function completes.

AFAIK, your understanding is correct.

All memory for local variables is "allocated" upon function entry. However, objects are not constructed until their actual declaration line. Therefore it is typically more efficient to declare variables/objects in the smallest scope possible; that way their construction/destruction can be skipped if the object is never used.

There are exceptions, of course. Constructing an object inside of a loop body would result in repeated construction/destruction which is usually very wasteful.

That said....

I put the term "allocated" in quotes above because it's not what you think. Local variables are put on the stack. They're not allocated on the heap.

The stack operates differently from the heap. Heap memory is requested as needed, so the more you consume, the more memory your program uses.

Stack memory is taken from a pre-allocated block of memory that has been designated for stack space. Consuming more stack space does not cause your program to use more memory in the eyes of the OS. It's more like you're making use of memory your program has already allocated. Therefore it is not at all inefficient for the program to "allocate" space for all local vars up front.

Reducing the amount of stack space used by your program is not really an optimization. It will not increase speed or reduce memory consumption. Stack space is practically free and there is no downside to using more of it.

Until, of course, you run out. But that typically doesn't happen unless you are doing very deep recursion and/or putting ridiculously large arrays/objects on the stack.
Disch's answer was very good (thanks, I also found it helpful)

If you really must decrease your stack allocation, and you are using (or can use) multiple threads, the CreateThread function lets you specify the stack size for the new thread... I'm not too familiar with it, as I'm still working on the function-pointer part ( :/ ), but perhaps it would be possible to create a new thread or process with a smaller stack and remove the old one? I don't know for sure, and my advice would probably be not to worry about it, but if it really matters to you, perhaps you could look into something like that.

There might also be a way to change your stack size without creating a new process and killing the old one (assuming killing the parent doesn't kill the child too)

Here's a link referring to thread stack size; perhaps it is of use:
Spawning a new process just to modify the stack size of your program seems pretty hackish to me. I wouldn't recommend it.

New threads don't help either, since new threads just allocate another stack (each thread has its own stack). And you cannot remove the main process thread without also terminating the process (IIRC, but I'm not 100% on that. Either way I don't recommend doing it).

If you really must control the stack size of your program, look at your linker's options/settings. In VS this can be done in the project properties | Linker | System | "Stack Reserve Size".

But I REALLY recommend against doing any of this unless you really know what you're doing. I must reiterate that stack space is not an optimization issue (except in very specific circumstances). Your optimization efforts are better spent elsewhere.
Thank you for your responses.

Also, it's not so much speed as much as it is learning and optimization.
It's not about speed, but it's about optimization?

o_O huh?

Hahaha, it's cool. Just giving you a hard time.

int main()
{
    int a;  // Stack usage increases by sizeof(a) bytes. a "allocated" on the stack.
    char b; // Stack usage increases by sizeof(b) bytes. b "allocated" on the stack.
    float * pf = new float[4]; // Stack usage increases by sizeof(pf) ( sizeof(float*) ).
    // pf "allocated" on the stack. Points to heap memory of sizeof(*pf)*4 bytes.
    delete[] pf; // The heap memory pf points to is freed.

    return 0; // <- pf "deallocated" from the stack, stack usage decreases ...
    // b "deallocated" etc etc
    // a "deallocated" etc etc
}


Hope this is useful.
You're forgetting alignment.
Huh, interesting. Does this also explain why "memory leaks" occur? (Reasoning: the stack is deallocated but the heap is not, so if pf is never 'deleted', the stack space is freed but the heap memory isn't.)


Also, in a recursive directory algorithm I wrote (because of "ACCESS DENIED"... it just couldn't be helped, so I had to make it skip those), I basically provided all of the variables to the function by address, so nothing new was created and there was no "limit" on how many files/folders my algorithm could take. The only problem I could not solve was this:

It gets the files and folders in a folder, then calls itself with that list and does the same thing (each time adding these small lists to a 'main' list). I had to have it create a vector<string> to pass these temporary lists (because we can't pass all of them with every recursion). Is a stack overflow a possibility if a filesystem is large/deep enough? And lastly: if this is a 'vulnerability', how can I prevent it?

vector<path> function(const path& parent = def_path, vector<dir>& stuff_found = def_dirv)
{
    vector<dir> temp; // I'm worried because (the way I understand it)
                      // this vector will be created upon each recursion
    temp = get_files(parent);
    // ...
}
Is a stack overflow a possibility if a filesystem is large/deep enough?

Stack overflow is always a possibility with recursion, but it's EXTREMELY unlikely here unless you try to run on a very, very deep filesystem. We're talking maybe tens of thousands of nested directories (or more). I'm not even sure if most filesystems allow them to get that big.

I'm worried because (the way I understand it) it will create this vector upon each recursion

It will. But vectors are small so it shouldn't be a concern. They typically only contain a few pointers. Do a sizeof(vector<dir>) to see for yourself. That is how many bytes get put on the stack (regardless of how many elements are in the vector).

The actual data IN the vector (ie, the values you push_back) are not stored on the stack, but are stored on the heap.
I used to, mistakenly, use a big character buffer in a recursive function that gave me a vector of paths, just like you tried to do, and many stack overflows occurred. Then I understood what was wrong. So yes, it's a possibility if you aren't experienced. If you run into a stack overflow, use strings instead of char arrays; they take less space on the stack and can hold a lot more on the heap.
OK. Well, that was the only concern I had, because I just could not avoid the vector, since those directories had to be separated from the rest of the list. All of the other variables are passed by address, though, so that vector is literally the only thing the function creates.


Also, I have the program running. Works like a charm! All directories it can't 'get into' are logged, and the rest are all thrown into a single list. This is used to profile a "snapshot". When the user wants to take another "snapshot", he/she can choose to compare them, and it will tell the user ALL of the deleted files/folders, and all of the created files/folders. From that point, he/she can save it as a record for future reference. I love it so far! (power....)
Topic archived. No new replies allowed.