For every "new" must be a "delete"?

At least I thought so.

But the professor at my college insists that if I allocate memory to a pointer:

my_class * object = new my_class;

And then stop pointing to that allocated memory:

object = NULL;

or

object = any other pointer

Then C++ will automatically free the "new" allocation above.

I said that I doubted this was the case, and that for every new there must be a delete somewhere.

But then he said that in more recent versions of C++ (14 and up) it is not needed...

I'm not sure how I could even test it, since I won't have access to memory to see if it's still allocated...

Maybe someone here will clarify that for me.

Thanks in advance!

Raw pointers like that still require a delete. Otherwise you have a memory leak.

In more recent versions of C++ there are smart pointers. They are designed to free the memory automatically.

See:
http://www.cplusplus.com/reference/memory/unique_ptr/
http://www.cplusplus.com/reference/memory/shared_ptr/
etc.

Your teacher seems to have only a faint idea of C++...
Sounds like he's confused.

Before C++11, there were just pointers, and new and delete. If you created an object with new, you must also call delete on it, through the pointer.

Since C++11, there are smart pointers: unique_ptr and shared_ptr. If you use these correctly, you don't need to call delete; when the smart pointer goes out of scope (or you reset it, reassign it, or some such), it will handle that deleting for you.

If you're using a plain pointer, you must still call delete yourself, for every new.

If you're using a smart pointer, and you're using it correctly, you use new to create the object and give it to the smart pointer, and the smart pointer takes care of the delete for you.

You can also use make_unique and make_shared instead of new, but that's an extra topic in this discussion.

The key point here is that you are right, and your instructor is confused and mistaken, mixing up different standards of C++. If you wish to correct your instructor, be careful how you do it. Find a way for your instructor to keep face. It's really not worth the bother otherwise.


I'm not sure how I could even test it, since I won't have access to memory to see if it's still allocated...


Some people call it computer science. I see very little science in it, but what separates science from philosophy is the ability to gather experimental evidence. Like this:

#include <iostream>

class testClass
{
  public:
  testClass() { std::cout << "I'm being constructed!" << '\n';}
 ~testClass() {std::cout << "I'm being destroyed!" << '\n';}
};

int main()
{
   std::cout << "Automatic variable in a small scope:" << '\n';
  {
    testClass one;
  }

  std::cout << "Manual new and delete:" << '\n';

  testClass* pointer_one = new testClass;
  delete pointer_one;


  std::cout << "Manual new but no delete, just assign NULL:" << '\n';

  testClass* pointer_two = new testClass;
  pointer_two = NULL;
}


Output:
Automatic variable in a small scope:
I'm being constructed!
I'm being destroyed!
Manual new and delete:
I'm being constructed!
I'm being destroyed!
Manual new but no delete, just assign NULL:
I'm being constructed!


The last one is never destroyed. Its destructor does not run. Nothing ever called delete on it. It's a memory leak.
C++ does not automatically free allocations. If we want something to happen, then we have to write it.

Standard containers and smart pointers do manage the memory that they "own", i.e. their destructors (if called) do deallocate the owned memory. Raw pointers do not.

Here is "modern C++" example:
#include <iostream>
#include <memory>

int main ()
{
  auto foo = std::make_shared<int> (42);
  std::cout << "*foo: " << *foo << '\n';
  return 0;
}

There is exactly one call of new int and one corresponding call of delete in that example program.
The new is called within the function make_shared,
and the delete is called by the destructor of shared_ptr<int> object foo.
Wow. Thanks for the fast in-depth responses!

"The key point here is that you are right, and your instructor is confused and mistaken, mixing up different standards of C++. If you wish to correct your instructor, be careful how you do it. Find a way for your instructor to keep face. It's really not worth the bother otherwise."

He's very smart and knows a lot, he pushes us a lot so we learn more on our own, but I guess he got confused on this one, it happens...

Just to clarify:

In this case we were programming a custom List with template from scratch:

#pragma once
#include "Node.h"

template <class T>
class List {

private:
	Node<T> *first, *last, *i;	
	int size;

public:
	List();
	void Insert(T data);
	void Remove(T data);
	void Show();
	void Sort();
	T* getElement(int index);


};

template <class T>
void List<T>::Insert(T data) {

	Node<T> *new_node = new Node<T>; // Needs to be a pointer so that it doesn't vanish after the scope;
	new_node->setElemento(data);

	if (first == NULL) {
		first = new_node;
		last = first;
		i = first;
		size++;
	}
	else {
		i = last;
		last->next = new_node;
		last = new_node;
		last->previous = i;
		size++;
	}

}

template <class T>
void List<T>::Remove(T data) {

	if (first != NULL) {

		i = first;

		for (int k = 0; k < size; k++) {
			if (data == i->getElemento()) {

				if (i == first) {
					i = first;
					first = first->next;
					first->previous = NULL;
					delete i;
					size--;
					break;
				}
				else if (i == last) {
					i = last;
					last = last->previous;
					last->next = NULL;
					delete i;
					size--;
					break;
				}
				else { // to "remove" the selected node, make it's next and previous nodes point to each other.
					i->next->previous = i->previous;
					i->previous->next = i->next;
					delete i;
					size--;
					break;
				}

			}
			else {
				i = i->next;
			}
			
		}
	}

}


So I must call delete in my Remove() method. Merely not pointing to the Node to be removed won't de-allocate it automatically like he said...
You can prove it in your own code by having each node log out its construction and destruction.
Ok, I will try that!
Oh dear, the only thing worse than a professor teaching outdated ("C-style") C++ is a professor teaching incorrect C++.

Maybe some compilers will be able to detect the leak, and not leak memory, but it certainly isn't part of the standard.

Repeater is exactly right, you could anonymously email your professor and show a clear example, with an online link to a C++14 compiler (ideone for example). This will prove it to yourself as well, instead of taking our word for it. (Or, print out a simple program with output and put it under his door.)
Not unless you use the new pointer classes, it won't.

On the professor thing, I wouldn't bring it up in class.
1) just code it up with the new pointer classes and comment them appropriately like
//this is the 'new' c++ smart pointer class that destroys itself when out of scope
if he cares, he will ask you about it or look it up himself and learn it.

or
2) have a nice chat one on one after class ... Hey, look what I learned about this stuff, does this seem correct to you?

Pointers are less used today, but knowing them is critical for reading old code, and it's a bit advanced to ask a student to write a linked list without one (it's doable, but a bit much to bite off first time around).

Before C++11, there were just pointers, and new and delete.

Not entirely true. Before C++11 there was auto_ptr. It was deprecated in C++11 when other smart pointers were introduced and removed in C++17.

auto_ptr and unique_ptr act much the same with assignment: both transfer ownership (though unique_ptr requires an explicit std::move).
I throw in one more thing.
It's bad practice, and not good coding, but you can skip the deletes without harm on most modern OSes for short-duration, small programs. The OS will clean up after your bad behavior when your program exits. This was not the case way, way back when, and eventually the OS would run out of memory that had been assigned out to dead processes, and the fix was a reboot, which could be a bit annoying. So in that sense, while it is a horrible practice, the memory will be deleted for you (this of course isn't assured, but I don't know of any modern OS that lacks this self-defense cleanup).

I bring it up for one reason: if you check your memory before you run a program, run your test program and allocate enough memory to notice (say, 1 GB or something), check the memory, let the program exit, and check again, it will not be missing that 1 GB in the final check. This could lead one to conclude that the pointers self-cleaned properly, but that isn't really what happened; it was the OS's self-defense routines, and not the program, that cleaned it up. Don't be fooled by this when looking for memory leaks.



Of course, there's a difference between the memory being released, and the resource being released. If the program doesn't delete its pointers the OS may release the memory, but there's no telling if the resources were released. For example:
#include <stdexcept>

// file_exists(), create_file(), delete_file() are assumed declared elsewhere
class PersistentResource{
public:
    PersistentResource(){
        if (file_exists())
            throw std::runtime_error("Only one instance at a time!");
        create_file();
    }
    ~PersistentResource(){
        delete_file();
    }
};

int main(){
    new PersistentResource();
    return 0;
}
On any modern OS, this program doesn't leak memory, but it does leak its resource, because its destructor never runs.

On any modern OS, this program doesn't leak memory, but it does leak its resource, because its destructor never runs.


Exactly.

We learn to develop games in our classes (small ones, but still...), so we can't wait for the OS.

We need to free resources as soon as logically possible.

You can prove it in your own code by having each node log out its construction and destruction.


It worked as you said:

Assigning NULL to the pointer "node" didn't call any Node destructors...

Only " delete node; " did destroy the instance.

I will talk to the guy and try to prove my point (with the application to back it up, of course). He is a reasonable guy and we get along a bit...

I've come up with other doubts before... He was always humble (if a bit stubborn) but never got offended.

All you guys were really great!

Thanks a lot for the answers, cheers!

Gordyne wrote:
I'm not sure how I could even test it

Besides the already-suggested noisy destructors in your classes, if you can access a machine with Linux you can use valgrind (a memory-leak checker tool).

Here's a program based on the first message in this thread:
~/test$ cat test.cc
#include <cstddef>
int main() {
    struct my_class {};
    my_class * object = new my_class;
    object = NULL;
}

compile:
~/test$ g++ -g -o test test.cc

check for leaks:
~/test$ valgrind --leak-check=full ./test
==26777== Memcheck, a memory error detector
==26777== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==26777== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==26777== Command: ./test
==26777==
==26777==
==26777== HEAP SUMMARY:
==26777==     in use at exit: 1 bytes in 1 blocks
==26777==   total heap usage: 1 allocs, 0 frees, 1 bytes allocated
==26777==
==26777== 1 bytes in 1 blocks are definitely lost in loss record 1 of 1
==26777==    at 0x4C28148: operator new(unsigned long) (vg_replace_malloc.c:333)
==26777==    by 0x4005B8: main (test.cc:4)
==26777==
==26777== LEAK SUMMARY:
==26777==    definitely lost: 1 bytes in 1 blocks
==26777==    indirectly lost: 0 bytes in 0 blocks
==26777==      possibly lost: 0 bytes in 0 blocks
==26777==    still reachable: 0 bytes in 0 blocks
==26777==         suppressed: 0 bytes in 0 blocks
==26777==
==26777== For counts of detected and suppressed errors, rerun with: -v
==26777== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 5 from 5)


This is telling me that the program had a memory leak of 1 byte (the "definitely lost" line), because there never was a matching delete for the new at line 4 of test.cc.
Hypothesis:
if I allocate memory to a pointer:
my_class * object = new my_class;
And then stop pointing to that allocated memory:
object = NULL;
or
object = any other pointer
Then c++ will automatically free the "new" allocated above.

If the hypothesis is true, then a program:
void fubar() {
  size_t N = something large
  double* p = new double [N]; // barely fits
  p = nullptr; // hypothesis: this invokes "garbage collection"
}

int main() {
  for ( /*many times*/ ) {
    fubar();
  }
}

should "work okay".
On every call of fubar it will allocate a block of memory that is large, but not so large that the PC runs out of memory on a single allocation.

According to the hypothesis, the allocated memory is automatically deallocated, and therefore our program should be able to call fubar() as many times as it pleases, without ill effects.


If the hypothesis is wrong, then a test run will demonstrate how the OS copes with an out-of-memory situation. That can be harsh.
That particular program may never run out of memory, actually. Some OSs (such as Linux) do not commit a page until it's been written.