Scope of Heap Allocated Objects

I thought that one of the main benefits of heap allocation was that the pointer/object has the scope of the entire program, but in the example below, the three heap-allocated pointers are 'undefined' after exiting the scope. So do heap-allocated pointers/objects have global scope, or are they local to the current scope? And if they are local to the current scope, then what are the benefits of heap allocation over stack allocation?

 1	{
 2		int* h_value = new int;
 3		*h_value = 5;
 4		int* h_arr = new int[5];
 5	
 6		for (int i = 0; i < 5; i++)
 7		{
 8			h_arr[i] = i + 1;
 9		}
10	
11	
12		Vec3* h_vec3 = new Vec3;
13	}
14	
15	delete h_value;  // identifier "h_value" is undefined
16	delete[] h_arr;  // identifier "h_arr" is undefined
17	delete[] h_vec3; // identifier "h_vec3" is undefined
Scope and lifetime are two different things!
https://www.csee.umbc.edu/~chang/cs202/Lectures/modules/m05-scope/slides.php?print

"Scope", in simple terms, is denoted by being surrounded by a pair of { }. You can't access a variable declared within a pair of { } from outside of it. The lifetime of an object can be automatic (on the stack), static (exists for the lifetime of the program), or dynamic (exists when made with new, no longer exists once deleted).

The pointers that you declared on lines 2, 4, and 12 are all still just data on the stack (automatic lifetime). They point to data with a dynamic lifetime.

If you use dynamic allocations, you must always keep a handle that points to the dynamic memory, so that you can later delete it. Otherwise, if the handle (pointer) is lost, you have a memory leak.

There's no need for you to use dynamic allocation for this example, but if you still wish to, then do something like this:
int* h_value;
int* h_arr;
Vec3* h_vec3;
{
    h_value = new int;
    h_arr = new int[5];
    h_vec3 = new Vec3;
}
delete h_value;
delete[] h_arr;
delete h_vec3; // notice: not delete[] 
int* h_value = new int;
With this line, TWO objects are created.

One of them is an int, on the heap.
The other, named h_value, is an int-pointer, on the stack.

When the scope ends, the one on the heap will continue to exist.
The one on the stack will not.

delete h_value; // identifier "h_value" is undefined
Because h_value was on the stack, and it went out of scope, so no longer exists. What it was pointing to still exists, on the heap.

When you create objects on the heap, do NOT lose the pointer to them.
Because h_value was on the stack, and it went out of scope, so no longer exists. What it was pointing to still exists, on the heap.

So in this case, does delete actually need to be called at all? I thought it was necessary to always call delete if new was previously called.

So in this case, does delete actually need to be called at all? I thought it was necessary to always call delete if new was previously called.


The object on the heap is still there. Still taking up memory. If you do this enough, creating objects on the heap and never deleting them, you'll run out of memory and your program will crash.

Losing the pointer to something on the heap and never being able to delete it is called a "memory leak". It's a bad thing.
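
A quick sketch of how a leak builds up (the sizes and loop count here are arbitrary, just for illustration):

int main()
{
    for (int i = 0; i < 1000000; i++)
    {
        int* p = new int[1000]; // 'p' is the only handle to this allocation
        p[0] = i;
    }   // 'p' goes out of scope each iteration, so every allocation is leaked
}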
Yes, you absolutely want to call delete for every new, or you'll have memory leaks.

When you call int* h_arr = new int[5];, two things are created, like Repeater said.


                             new int[5]:
                          +--------+---------+--------+-------+--------+
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
+-----------+             |        |         |        |       |        |
|           |             |        |         |        |       |        |
|  h_arr    +------------>+        |         |        |       |        |
|           |             |        |         |        |       |        |
+-----------+             |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          +--------+---------+--------+-------+--------+



On line 13, h_arr (the pointer itself, not the data it points to) goes out of scope.
What you remain with is:


                             new int[5]:
                          +--------+---------+--------+-------+--------+
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          |        |         |        |       |        |
                          +--------+---------+--------+-------+--------+


But now you have no handle to the data you allocated -- memory leak.
Okay, so in my example, the pointer is stack allocated, but what it points to is heap allocated. If the pointer goes out of scope, then I have a memory leak, because I have lost the pointer and therefore have no way of referencing what it was pointing to.

So this may not be the kind of question that can be answered briefly, but I'll ask it anyway... what are the actual benefits of heap allocation? Why don't we just allocate everything on the stack, seeing as the pointer that refers to the data is stack allocated and will therefore vanish once it is out of scope?
The stack is small. Typically a few megabytes. If you want a large object, it's going on the heap.
The advantages of using the heap are really only two that I can think of.
1) The stack is significantly smaller. At some point, you MUST use the heap if you want your huge wad of data to be in memory.
2) It is rather challenging to build chained data structures (graphs, trees, linked lists, etc.) on the stack. It's not impossible, but it's going to be messy.

As others said, you have confused the pointer with the memory.
The pointer follows normal scope rules and can be 'lost' by going out of scope. The memory is allocated by the new statement via the operating system and is marked as owned by your program for its execution lifespan (most OSes recover the memory after the program and its threads all terminate; some embedded or antique OSes like DOS may not). Your program still owns the memory even if you threw away its address (by going out of scope, overwriting it, or whatever).

I'm old school, so my solution to working with pointers is to tell you to learn to code. It's sort of a three-tiered approach: don't use pointers unless you really need them (we need less of this manual memory allocation thanks to containers that manage it for us, like vector). If you do need them, we have smart pointers that offer SOME (but not absolute) protection. But the third tier is the key: practice, learn how it works, and learn how to use pointers safely. Nothing is better than this; consider C programmers, who don't have the first two tiers: they get by fine using pointers, through understanding and practice.
The stack is (or at least used to be) very small compared to the heap.

The size of a stack allocation is a static constant that must be known at compile time. The heap allows dynamic sizes.
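
For example, here is a minimal sketch of a run-time-sized allocation, which is only possible on the heap in standard C++:

#include <iostream>

int main()
{
    int n;
    std::cin >> n;               // the size is not known until run time

    // int stack_arr[n];         // not standard C++: a stack array needs a
                                 // compile-time constant size
    int* heap_arr = new int[n];  // fine: the heap size can be chosen at run time

    delete[] heap_arr;
}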


There are std::string, std::vector, std::unique_ptr, std::shared_ptr, etc.
These objects hold "a pointer", and when they die, they deallocate the heap memory that they point to.
One example is when you have a container, of a size unknown until run time, of pointers to polymorphic objects. Pointers or references are needed for polymorphism to work, and you can't store references in an array. If, at some point during the program, you create another polymorphic object within a limited scope, you can add it to your array by doing something like this:
DerivedClass obj;    // automatic (stack) lifetime
BaseClass* p = &obj; // base-class pointer, needed for polymorphism
arr[i] = p;          // store the pointer in the array
But once obj goes out of scope, you now have a dangling pointer (you're pointing at junk). It's really hard to do something like this without using dynamic lifetimes.

Another example is when you have a vector of some really big object type. Even if the stack had unlimited space, if the vector needs to re-allocate after you push_back, then every single object needs to be copied over. If you just have a vector of pointers, then the cost of doing this is relatively minor. But now you're dealing with pointers, and you run into the same issues as the last example.

Note: Using raw "new" and "delete" is usually discouraged in C++. The standard library provides "smart" pointers (unique_ptr and shared_ptr) that help manage this.
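
For instance, the polymorphic-container example above could be done with unique_ptr. A rough sketch (Base and Derived are made-up names):

#include <memory>
#include <vector>

struct Base { virtual ~Base() = default; };
struct Derived : Base {};

int main()
{
    std::vector<std::unique_ptr<Base>> arr;
    arr.push_back(std::make_unique<Derived>()); // dynamic lifetime, owned by the vector

    // No delete needed: when 'arr' is destroyed, each unique_ptr
    // deletes the object it owns.
}

The objects outlive whatever inner scope created them, yet nothing has to remember to call delete.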


keskiverto wrote:
The size of a stack allocation is a static constant that must be known at compile time.
Can you explain this more? I'm not sure I understand. Certainly there is a static limit to the number of stack allocations there can be (else you get a stack overflow), but the number of stack allocations must still be partially dynamic, or else you couldn't do things like conditional recursion.
#include <iostream>

void recurse()
{
    int a;
    std::cin >> a;
    if (a == 42)
        recurse();
}

Edit: Of course, the size needed for each function call's stack frame is still known at compile time, regardless of the number of invocations. I suppose this is what is meant. Guess I answered my own question.
Okay that's plenty for me to digest. Thanks for everyone's input.
what are the actual benefits of heap allocation?
Well, heap-allocated memory may exist longer than the enclosing function; a local variable does not. Hence it is sometimes unavoidable to use dynamically allocated objects.

Instead of raw pointers, you had better use smart pointers:

https://en.cppreference.com/w/cpp/memory
heap-allocated memory may exist longer than the enclosing function; a local variable does not. Hence it is sometimes unavoidable to use dynamically allocated objects.

It seems slightly misleading to say that heap-allocated memory lasts longer than the scope it is created within.

While it is strictly true that heap-allocated memory will persist past the scope it was created within, the reality is that the underlying pointer to that memory won't, which, at least to my current understanding, renders the heap-allocated object redundant, since it can no longer be referenced or used in any way.

I am just getting used to the concept of heap allocation, so feel free to correct me if there is something I have overlooked, but it seems to me that extending the lifetime of an object beyond the scope it was created within via heap allocation is actually a drawback rather than a benefit, and it requires some cleaning up as well, which can lead to further complications.

NOTE: This is not to say that there aren't other benefits to heap allocation, just that the aforementioned 'lifetime extension' is perhaps not as simple and beneficial as it sounds.
calioranged wrote:
I am just getting used to the concept of heap allocation, so feel free to correct me if there is something I have overlooked, but it seems to me that extending the lifetime of an object beyond the scope it was created within via heap allocation is actually a drawback rather than a benefit, and it requires some cleaning up as well, which can lead to further complications.

You can have several pointers to the same heap-allocated object. So if one pointer goes out of scope, there could be other pointers still pointing to the memory, expecting it to be valid. If the memory were freed automatically when the first pointer went out of scope, the program would malfunction.
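
That shared-ownership situation is exactly what std::shared_ptr is for. A minimal sketch:

#include <iostream>
#include <memory>

int main()
{
    std::shared_ptr<int> a = std::make_shared<int>(5);
    {
        std::shared_ptr<int> b = a; // two owners of the same heap int
    }   // 'b' goes out of scope, but the int survives: 'a' still owns it

    std::cout << *a << '\n';        // prints 5; freed when 'a' is destroyed
}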
but it seems to me that extending the lifetime of an object beyond the scope it was created within via heap allocation is actually a drawback rather than a benefit
Correct. Heap allocation should be avoided and should only be done when there is no other way. By using existing containers like std::vector, you can largely avoid allocating memory yourself.
Correct. Heap allocation should be avoided and should only be done when there is no other way.

Thanks for confirming that.
Grime wrote:
You can have several pointers to the same heap-allocated object. So if one pointer goes out of scope, there could be other pointers still pointing to the memory, expecting it to be valid.

Is this the kind of situation to which you refer:

#include <iostream>
#include <memory>

int* CreateandFillArray()
{
	int array[5]; // stack allocated: destroyed when the function returns

	std::cout << "array created at " << std::addressof(array) << std::endl;

	for (int i = 0; i < 5; i++)
		array[i] = i + 1;

	return array; // BUG: returns a pointer to a local array
}

int main()
{
	int* arr = CreateandFillArray();

	std::cout << "array returned to " << std::addressof(arr) << std::endl;

	std::cout << std::endl;

	for (int i = 0; i < 5; i++)
		std::cout << arr[i] << std::endl; // undefined behaviour: 'arr' is dangling

}

//----------
Output:
19907537
13630212
264206639
264485856
264485856
//----------

The array was created on the stack and is therefore destroyed when exiting the 'CreateandFillArray()' function, leaving the returned pointer 'arr' dangling, so reading through it yields junk values.

Whereas if the array was heap allocated:

#include <iostream>
#include <memory>

int* CreateandFillArray()
{
	int* array = new int[5]; // heap allocated: survives the function

	std::cout << "array created at " << std::addressof(array) << std::endl;

	for (int i = 0; i < 5; i++)
		array[i] = i + 1;

	return array;
}

int main()
{
	int* arr = CreateandFillArray();

	std::cout << "array returned to " << std::addressof(arr) << std::endl;
	
	std::cout << std::endl;
	
	for (int i = 0; i < 5; i++)
		std::cout << arr[i] << std::endl;

	delete[] arr; 
}

//----------
Output:
1
2
3
4
5
//----------

The memory is retained after exiting the 'CreateandFillArray()' function and is returned to a new pointer, where the data can now be referenced/used from the main function.

--------------------

Seems to be a fair example of where extension of lifetime via heap allocation would actually reap some benefits.
Yes, but don't do this in modern C++. Just return an std::vector.
Heap allocation itself isn't that bad, what's bad is managing it yourself when you don't have to (i.e. calling new/delete yourself).
// Example program
#include <iostream>
#include <memory>
#include <vector>

std::vector<int> CreateandFillArray()
{
	std::vector<int> array(5);

	std::cout << "array created at " << std::addressof(array) << std::endl;

	for (int i = 0; i < 5; i++)
		array[i] = i + 1;

	return array;
}

int main()
{
	std::vector<int> arr = CreateandFillArray();

	std::cout << "array returned to " << std::addressof(arr) << std::endl;
	
	std::cout << std::endl;
	
	for (int i = 0; i < 5; i++)
		std::cout << arr[i] << std::endl;
}
Ganado wrote:
Yes, but don't do this in modern C++. Just return an std::vector.
Heap allocation itself isn't that bad, what's bad is managing it yourself when you don't have to (i.e. calling new/delete yourself).

Okay thanks again.