Dynamic Memory

Hello,

I want to ask:

1. Why do we need to delete a pointer after we have allocated it with new?
// I have read that it's because of memory leaks, but I don't understand. If it causes a memory leak, can somebody explain it?
AND
#include <iostream>
#include <cstdlib>   // system()
#include <new>       // std::nothrow
#define N '\n'
using namespace std;

int main()
{
    int *p = new (nothrow) int(0);
    int *arr = new (nothrow) int[5];
    for(int i = 0; i < 5; i++)
        cin >> *(arr+i);
    system("CLS");
    if(p == 0)
    {
        cout << "Error : p memory can't be allocated" << N;
        for(int i = 0; i < 5; ++i)
        {
            cout << "arr[" << i << "] = " << *(arr+i) << N;
        }
    }
    else if(arr == 0)
    {
        cout << "Error : arr memory can't be allocated";
        cout << "p = " << *p;
    }
    else
    {
        cout << "p = " << *p << N;
        for(int i = 0; i < 5; ++i)
        {
            cout << "arr[" << i << "] = " << *(arr+i) << N;
        }
    }
}

I can run it without deleting the pointers allocated with new.

2. I found this program on another website
int *p = new(nothrow) int;
if (!p)
{
   cout << "Memory allocation failed\n";
}

What is the meaning of (!p)? (in the if condition)

3. What makes dynamic memory different from a usual pointer?
Is it to save memory, or what?
And can dynamic memory be used not through a pointer, but through a usual variable or something else?
The subject you're asking about is something you'll have to read about in books on C/C++, as it is too large to cover fully here.

However, the basic notion is this:

C (and the portion of C that is within C++) was originally designed to be close to assembler, a language very near the CPU's native way of operating. As a result the language deals with raw machine concepts like direct control of memory, and pointers are the means of doing that. Many other languages are higher level and remove the notion of pointers from the programmer's view, basically ignoring the underlying machine to some extent in order to simplify programming; there is a performance cost to that. Working at a level very close to the CPU's native means of operation gives high performance, but the cost is complexity, and the responsibility falls on the programmer to keep things clear and safe.

In your first code example above you test if p == 0 (and arr == 0), and print a message that p or arr could not be allocated. However, by the time execution reaches that test, the program would most likely already have crashed if arr could not be allocated, because arr was already dereferenced in the input loop.

The "!" operator is called "not". It inverts a logical test: if "if ( a )" evaluates to true because "a" is true, then "if ( !a )" evaluates to false, because "!" inverts the result. It turns false into true and true into false. So "if (!p)" is true exactly when "p" is a null pointer.

In question 3, you've confused pointers with dynamic memory. They are related but separate subjects. There is no practical way to use dynamic memory (allocated memory) without a pointer to store the result; avoiding one requires genuinely strange and confusing techniques which usually have little or no value.

Pointers are used to store the location of allocated memory, or to sequence through that memory, or to index a location in that memory.

When you use the word "new", you are taking the wheel and allocating memory yourself. Therefore, you also have to be responsible for when your program is done using that memory and give it back to the operating system via "delete". Unlike a normal variable, memory allocated in this way (dynamically) will NOT be automatically given back to the operating system when it goes out of scope:

{
    int *arr = new (nothrow) int[5] ;
    //Goes out of scope here... But memory was never given back to the operating system!
}


^Unless that is when the program terminates (in which case the operating system will clean up), that memory is now lost for the duration of the program. The operating system won't take it back because it thinks the program is still using it, and the program no longer has any way to reach the memory.

However, you can instead use a "smart" pointer and do something like this:

#include <iostream>
#include <memory>   // std::unique_ptr, std::make_unique (C++14)

int main()
{
    std::unique_ptr<int[]> pp = std::make_unique<int[]>(20);

    for (int i = 0; i < 20; i++)
        pp[i] = i + 10;

    std::cout << "Elements are:\n";
    for (int i = 0; i < 20; i++)
        std::cout << pp[i] << std::endl;

    // No need to delete "pp"; it releases the memory itself.
}



Your 3rd question is slightly different. Dynamic memory is done with a pointer. So the pointer is your only way to access that memory. The way to give the memory back is by using "delete"/"delete[]". However, a pointer like this:

int x = 10;
int *p = &x;


^Doesn't require a "delete" because the pointer isn't what's managing that memory. Once "x" goes out of scope, its memory is given back automatically. However, a pointer can become a dangling pointer in a situation like this:

#include <iostream>

int main()
{
    int *p;
    if (true)
    {
        int x = 10;
        p = &x;
    } // x is out of scope here

    std::cout << *p; // Not pointing at anything valid!

    // p = NULL; - how to deal with dangling pointers so they're not usable
}


^For some reason though, it seems to work just fine on Visual Studio 🤷‍♀️
@zapshe

^For some reason though, it seems to work just Fine on Visual Studio 🤷‍♀️


It may seem to work on a wide variety of compilers and platforms.

A lot may depend on how the compiler optimizes the code, if optimization is on, and some may be the behavior of the compiler interpreting this code.

There isn't really a requirement that int x be placed on a new level of the stack. It could be, certainly, in other contexts where the code is more complex before and/or after this clause, but there may not actually be any stack manipulation here, meaning x still effectively exists as if it had been declared before the "if" clause.

Of course there's no guarantee of this, as there is an implication of a stack stored local variable.

Even if the "if" clause were inside a function called by main, and the pointer p had been assigned to that stack resource, where we also know a local stack frame was created for the call and released after it returned, this code would likely still seem to work, because little has happened by that point to corrupt the content left behind from the function call. There will be calls into the "std::cout" machinery, but at the moment the content of p is dereferenced that has not happened yet, so it is likely this would still seem to work.

Then, if one other function were called between the assignment of p (and the release of x) before the "cout<<" was called, THAT could corrupt the stack and THEN we'd have proof this failed.

Valgrind or similar memory debugging software would probably notice this, as would many code analysis tools and some compiler's warnings might mention it.

@Niccolo

I tried several things to make it fail, but it never does: function calls in between, a function call to do the printing, a function call before the line that prints it within the function, etc.

I can even reassign the value like this:

#include <iostream>

int *nothing()
{
    int x = 10;
    return &x;   // deliberately returns the address of a local: undefined behavior
}

void murder(int *p)
{
    int size = 0;
    std::cout << "\n\nInput Size: ";
    std::cin >> size;
    *p = size;
    std::cout << "\n" << *p << '\n';
}

int main()
{
    int *p;
    p = nothing();

    std::cout << *p;

    nothing();

    std::cout << '\n' << *p;

    murder(p);
}


That's some hardcore impossible optimization. The pointer isn't looking at valid memory, so maybe optimization lets you cout the value before it's actually referenced. But at this point, I've printed it, made a function call, printed it, changed the value with user input, then printed it again (and it reflects the changed value).

Maybe it's possible Visual Studio doesn't even give back the memory if it sees it'll create a dangling pointer.


EDIT: Adding "delete p" before trying to output it is the only thing that'll make the program crash. I suppose my guess was right, it's not letting the memory go. Seems like there might be a memory leak with dangling pointers..?

Making p = NULL also does the trick, but then it's impossible to know whether or not there's actually a memory leak.
The form your post takes happens to work out, and not due to optimization but because the stack manipulations happen to be compatible due to the stack layout. You can actually see it if you open debugging windows that show the stack (not just the call stack - function names - but the stack in a Hex view).

I did get a warning that the stack was corrupted in VS 2019 debug.

That said, this illustrates what I'm talking about.

#include <iostream>

int *nothing()
{
    int x = 10;
    return &x;   // dangling: x dies when the function returns
}

void murder(int *p)
{
    int size = 0;   // occupies the stack slot that x once used
    std::cout << "\n" << *p << '\n';
}

int main()
{
    int *p;
    p = nothing();
    std::cout << *p;
    murder(p);
    std::cout << '\n' << *p;

    return 0;
}


Here you'll witness that *p ends up being changed to zero, even though the code that does it is in "murder", where int size = 0 is executed (size happens to sit at the same position on the stack once occupied by x while "nothing" executed).

However, that only happens in the debug mode.

In the optimized mode it seems to work correctly.

There's no mystery about that, really, when you look under the hood.

The optimized version doesn't make a call to a function "nothing" - it merely sets aside stack space (plus some 'safe' zone of operation for various parameters expected for "cout" calls coming up) and as a result nothing happens to release 'x' from storage.

It is important to realize that none of this experimental observation should even HINT or suggest that this SHOULD work, it is most definitely the kind of thing we know causes crashes.

Ah I see, that makes sense now. Thanks for the insight!
Topic archived. No new replies allowed.