C++: a not yet implemented, possible way to shrink arrays with the delete operator

We asked some questions about this topic to the kind community of http://www.stackoverflow.com

After reading all their answers, we are asking this question, which is described in more detail here:

http://ncomputers.org/content/code.php?src=suggestions/shrink%20arrays%20cpp

We know that it is not yet allowed, but could this suggestion theoretically be implemented one day in the C++ standard?

We can't use the std::vector, std::map, or std::list implementations.

std::realloc doesn't call the destructor automatically.

If it is theoretically possible, we'll plan for it and write similar lines in our code, which will remain commented out until something like this becomes possible.

We can't give more details about our code.

Thank you very much for your answers.

int main(){
#define START 1000000000
#define END 10
    unsigned int* array = new unsigned int[START];
    delete [START - END] array; // our suggestion: shrink the array to END elements
    delete[] array;             // delete the (shrunken) array
    return 0;
}
Last edited on
It is only possible if the operating system supports it. You would need to write your own versions of the standard library memory allocation functions to take into account the possibility of shrunken regions. It would be horribly inefficient.
We can't use the std::vector

But that's where that functionality is implemented in the C++ programming language.
Thank you very much for your time and your answers!

Best regards,

http://ncomputers.org
Last edited on
Cubbi:

std::vector::shrink_to_fit is slower, because it doesn't shrink in place.

http://ncomputers.org/content/code.php?src=examples/shrink%20arrays%201.cpp

LB:

Where do you suggest I start reading to write my own version of the standard library memory allocation functions?

Thank you very much for your time and your answers.
Last edited on
In your example:
int main(){
    int* a = new int[100];
    //Suggested syntax:
    delete [90] a;
    //hereinafter 'a' contains only the first 10 ints
    delete[] a;
    return 0;
}

After line 4, which 10 items are still in a? If it's the first 10 items, then I don't see why vector::resize() followed by vector::shrink_to_fit() won't work. I don't see anything that requires that these move the data.
@dhayden: the total allocated memory must change.
ncomputersorg wrote:
Where do you suggest I start reading to write my own version of the standard library memory allocation functions?
You should start by writing your own operating system. It will be easier.
1) In-place shrinking is not that useful, because there is a good chance that the freed memory would never be reused, due to memory fragmentation.
2) You can make your own vector implementation/specialization which would work hand in hand with a pool allocator and do in-place reallocation. Actually, if you use a standard vector specialization with a pool allocator, you will get more speed than using arrays and realloc.
3) If your program is designed properly, memory allocation speed should not be a bottleneck, which makes increasing the speed of allocation irrelevant.
LB:

Thank you very much for your answer.

You should start by writing your own operating system. It will be easier.

No! We want to avoid that. Someone is guiding me. I am starting to read the Itanium C++ ABI. It would only be necessary to change some implementations of realloc in the standard system library and make some changes inside the compiler internals.

https://gcc.gnu.org/onlinedocs/gccint/
http://mentorembedded.github.io/cxx-abi/abi.html

MiiNiPaa:

Thank you very much for your answer:

1) In-place shrinking is not that useful, because there is a good chance that the freed memory would never be reused, due to memory fragmentation.

I've already thought about fragmentation. It is better to have some fragmented but available memory than to waste memory. The goal is to avoid wasting memory, despite fragmentation.

2) You can make your own vector implementation/specialization which would work hand in hand with a pool allocator and do in-place reallocation. Actually, if you use a standard vector specialization with a pool allocator, you will get more speed than using arrays and realloc.
3) If your program is designed properly, memory allocation speed should not be a bottleneck, which makes increasing the speed of allocation irrelevant.

I can't answer this now, because I haven't tried it yet.

When you talk about the pool allocator, are you talking about std::allocator?

So far as I have researched, it uses ::operator new(size_t) to allocate memory :(
Last edited on
You are supposed to write your own allocator.

The goal of this is to avoid the waste of memory
At the cost of speed. If you care about every single byte, you will slow your program to a crawl. Memory is largely irrelevant. You just free it when you are done with it. Why really release memory? Reuse it. When your array needs more elements, add them in that unused space.

To make a proposal, you need to prove that the problem you are trying to solve exists. That means real code examples and proof that it really is a problem. Then you must prove that it cannot be solved without very extensive changes. After that, your idea might have a chance.
dhayden:

Thank you very much for your answer!

After line 4, which 10 items are still in a? If it's the first 10 items, then I don't see why vector::resize() followed by vector::shrink_to_fit() won't work. I don't see anything that requires that these move the data.

Yes, they are the first ten items. Yes, std::vector::shrink_to_fit works; the difference is that it may move the block. We suggest in-place shrinking to obtain better performance, despite some memory fragmentation.

I invite you to read (quick if you want) this paper:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3495.htm

And this draft:

http://ncomputers.org/content/code.php?src=examples/shrink%20arrays%201.cpp
Last edited on
MiiNiPaa:

Thank you very much for your answer.

You are supposed to write your own allocator.

We have almost achieved it by overloading the delete[] operator, and only have this complication:

http://ncomputers.org/content/code.php?src=examples/shrink%20arrays%205.cpp

You just free it when you are done with it.

Exactly.

Why really release memory? Reuse it. When your array needs more elements, add them in that unused space.

To make a proposal, you need to prove that the problem you are trying to solve exists. That means real code examples and proof that it really is a problem. Then you must prove that it cannot be solved without very extensive changes. After that, your idea might have a chance.

I invite you to read (quick if you want) this draft:

http://ncomputers.org/content/code.php?src=examples/shrink%20arrays%204.cpp
Actually, why not use deque? Its shrink_to_fit does not require moving any elements (whether a concrete implementation moves some of them anyway is another story). It grows without moving its elements. It has O(1) random access.

Sadly it seems that your site is not opening for me. I will try again tomorrow.
Last edited on
MiiNiPaa:

Thank you very much for your answer!

You have taught me a lot!

I tested deque and it worked! It didn't move the block!

I was very excited about deque!

Until I read:

deques are not guaranteed to store all its elements in contiguous storage locations: accessing elements in a deque by offsetting a pointer to another element causes undefined behavior.

http://www.cplusplus.com/reference/deque/deque/

One of the restrictions of our code is that it requires all objects (in this case colors) to be stored contiguously.

I confess something: our proposal, which I hope you can open one day, isn't really our proposal. It started as an idea; I posted it here on this forum and obtained a lot of feedback and ideas, which gave it enough arguments to become a proposal. This proposal is possible thanks to all of your answers, comments and feedback. I think that we've already researched enough to conclude that it is necessary for some cases.

There are other cases where vector and deque are the best option; that's why they exist. I've also used them or similar implementations (like HashSet, HashMap, etc. in Java) for other applications. But for heuristics, where the main goal is to optimize the code for its best performance, this improvement to the delete[size] operator is required to allow freeing data after it has been computed.

Thanks to your last answer, I'll update our draft a little and improve it. It is really the result of all your feedback.

I want to thank all of you with a page where we mention each of your usernames. It will require some time and for now it is in the queue, but I want to personally thank you, MiiNiPaa, for your knowledge, your time, your patience and your help. All of it together contributes a lot to our project, which in the end should contribute to the benefit of all of us.
Last edited on
While allocators that shrink seem like a good idea, I can't help but think that you're misplacing the optimization here. If the data is large, then allocate it separately and use a vector of pointers to it. Does the data really need to be sequentially allocated or do you just need random access to it? Is there nothing else that you can optimize?

If it really matters, then create an allocator based on malloc/free. Then create your own MyVector template based on vector. Redefine resize() and shrink_to_fit() to use realloc() when the size shrinks, and explicitly call the destructor on the items that should be destroyed.
There is a problem regarding realloc: if your container used it, you could not have the strong exception guarantee anymore. Actually, you could not have anything but TriviallyCopyable types in the container.
MiiNiPaa, that's why I said to use realloc when the size shrinks. Assuming that shrinking doesn't move the data (which the code should probably check), there would be no move. If the collection grows then you'd use malloc() and copy the data.
that's why I said to use realloc when the size shrinks
Realloc might resize in place or allocate a new memory block. The only way to check is to look at the result.
Thank you very much for your answers!

dhayden:

I want to personally thank you for all your contributions. You have also taught me a lot!

Does the data really need to be sequentially allocated or do you just need random access to it?

Yes, especially with the arrays of conflicts. When a vertex color has a neighbor with the same color, it is placed into the next linked list. That means: color red for vertex 0 is a member of the linked list "zero conflicts" (0x001), but if a neighbor of this vertex changes its color to red, then color red of vertex 0 must move to the next linked list: "one conflict" (0x002).

Is there nothing else that you can optimize?

There will always be something to optimize.

If it really matters, then create an allocator based on malloc/free. Then create your own MyVector template based on vector. Redefine resize() and shrink_to_fit() to use realloc() when the size shrinks, and explicitly call the destructor on the items that should be destroyed.

This is a very good idea. Definitely it would be necessary to do this, or at least something similar.

For now I am writing this code, and it isn't so important to shrink the arrays to free the useless data. In the future it is probable that a server with ECC memory will run a service based on this code. With in-place shrinkable arrays or vectors, it would be able to run more threads.

MiiNiPaa, that's why I said to use realloc when the size shrinks. Assuming that shrinking doesn't move the data (which the code should probably check), there would be no move. If the collection grows then you'd use malloc() and copy the data.

Exactly.

MiiNiPaa:

Thank you very much for everything!

Realloc might resize in place or allocate a new memory block. The only way to check is to look at the result.

Exactly.

PF, ncomputers.org
Topic archived. No new replies allowed.