My code now works, but I'm confused about the initial problem.
Let's say I have an array of pointers, each of which I call new for.
Next, let's say I copy one of those elements to another pointer (let's call it *copy). They now point to the same thing.
OK, now let's say that in a destructor, I delete the pointers in the array.
Here is the thing: if I don't first write "copy = nullptr", Visual Studio throws a somewhat cryptic exception, but the gist is a heap violation of some sort (some library code hits a breakpoint, and little else is said).
I guess I see why doing what I describe is poor practice (my copy pointer now dangles, pointing at freed memory, if I don't null it), but what if I was going to re-assign that copy to something before using it next? Sure, this is messy and bad practice, but why actually throw an exception and stop execution? Isn't this sort of thing more in the purview of a static analysis tool? Is there something more fundamental that I'm missing?
Your example is pretty much exactly the situation I'm describing. It ran without a hitch. The debugger seems to confirm it's doing exactly what my code does.
Strange. Assuming nothing else is different (no longer a safe assumption, it seems!), my code would have caused the debugger to hit a breakpoint in some .dll at the line where a is deleted. Nulling copy first would have fixed it.
I'm at a loss as to why yours works, and mine doesn't!
Your other code might have some undefined behaviour that shows up that way.
1. Prefer the Standard Library containers for storing data. They manage the memory they use.
2. Prefer the Standard Library smart pointers over raw pointers. They manage the memory they point to.
I'm at a loss as to why yours works, and mine doesn't
There must be an explanation. You need to create a minimal example of your code that complains unless you null the copied pointer. Does it involve a user-defined class? Is it pointing to a std::string or std::vector, etc.? Do you do anything with the copied pointer?