allocation speed difference between vectors and arrays

This is a good point.

Nothing changed when I used clock() instead, though. I think that is because nothing will steal the thread of execution from a high-priority program running on a mostly idle multicore machine; in real applications it might give you different results.
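For what it's worth, here is a minimal sketch of the kind of timing harness I mean, using std::chrono::steady_clock (it is monotonic, so clock adjustments don't distort it); the workload inside is just a placeholder:

#include<chrono>
#include<iostream>
#include<vector>

int main(){
	using clock=std::chrono::steady_clock;
	auto start=clock::now();
	std::vector<int> v(1024*1024); //placeholder workload to be measured
	for(auto&x:v)x=1;
	auto stop=clock::now();
	auto us=std::chrono::duration_cast<std::chrono::microseconds>(stop-start);
	std::cout<<us.count()<<" microseconds\n";
	return 0;
}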
ncomputersorg wrote:
One of our programs, which we will publish soon, runs faster with arrays than with vectors.

So we are starting to write a paper on a possible future feature whose purpose is to allow arrays allocated with the new[] operator to be shrunk using the delete[] operator.

...
ncomputersorg wrote:
You mentioned some things that I didn't know.

Have I understood this correctly?

You have written a program. You have done some tests with it. The results made you conclude that the C++ standard has flaws and you will propose a change to the language to remedy them.

However, you admit that you don't know the current standard through and through. Perhaps your program and results have flaws due to that? If they do, then the conclusions can be wrong too.

It is possible that the standard committee and the compiler vendors have failed to notice the blatant optimization issue that you are about to write about, but how probable do you think that is?
The easiest way we know to create a runtime-variable number of vectors containing constructed objects is to use the new operator.
#include<iostream>
#include<cstdlib> //rand, srand
#include<ctime>   //time
#include<vector>
using namespace std;

struct A{
	static unsigned int a; //number of constructions
	static unsigned int b; //number of destructions
	A(){a++;}
	~A(){b++;}
};

unsigned int A::a=0;
unsigned int A::b=0;

int main(){
	unsigned int c,d,S=1024*1024;
	cout<<"How many vectors do you want?";
	cin>>c;
	srand(time(0));
	for(d=0;d<c;d++){
		//the returned pointer is discarded, so these vectors can never be deleted
		new vector<A>(rand()%S);
	}
	cout<<"Total objects constructed: "<<A::a<<endl;
	if(A::b)cout<<"Error! Objects destructed"<<endl;
	return 0;
}


I suppose if your goal is to leak memory and write unusable and incorrect code, that approach is the way to go.
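For comparison, here is a minimal sketch of how the same test is usually written without leaking: a vector of vectors owns everything and destroys it on scope exit (same struct A as above):

#include<cstdlib>
#include<ctime>
#include<iostream>
#include<vector>
using namespace std;

struct A{
	static unsigned int a;
	static unsigned int b;
	A(){a++;}
	~A(){b++;}
};

unsigned int A::a=0;
unsigned int A::b=0;

int main(){
	unsigned int c,d,S=1024*1024;
	cout<<"How many vectors do you want?";
	cin>>c;
	srand(time(0));
	vector<vector<A>> vs; //owns every inner vector
	vs.reserve(c);
	for(d=0;d<c;d++){
		vs.emplace_back(rand()%S); //construct each vector in place
	}
	cout<<"Total objects constructed: "<<A::a<<endl;
	return 0;
} //vs and all its elements are destroyed here: no leak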


One of our programs, which we will publish soon, runs faster with arrays than with vectors.

Does anyone claim that arrays and vectors are the same speed? Maybe the same order of magnitude, but not identical speed.

But more importantly, if you're worried about speed, the difference between arrays and vectors should be one of the very last things you look at. Have you examined your algorithms? Profiled the code? This sounds like it may be premature optimization.
Thank you to all of you for your time and your comments :D

MiiNiPaa:
Your amazing code taught me something new: the short names for data types, such as plain unsigned for unsigned int. I like it!

I should indicate better which code contains exactly what we want to achieve: we are looking for better performance when shrinking an undefined number of random-sized arrays, to avoid wasting memory when that matters. (Note that 90% of the time computers have enough memory and can waste some gigabytes for a short time.)

Comparison of shrinking arrays versus shrinking vectors:

http://ncomputers.org/content/code.php?src=examples/shrink%20arrays%201.cpp
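For reference, this is a small sketch of the standard ways to shrink a vector that we compare against; it is an illustration, not our benchmark code:

#include<vector>

int main(){
	std::vector<int> v(1000);
	v.resize(10);      //logical size shrinks, capacity usually stays at 1000

	v.shrink_to_fit(); //C++11: non-binding request to release the spare capacity

	//Pre-C++11 "swap trick": builds a tight copy and swaps it in.
	std::vector<int>(v.begin(),v.end()).swap(v);
	return 0;
}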

JLBorges:
Thank you very much for your comment, you taught me something new!

keskiverto:
You can find what we are proposing to improve dynamic arrays in C++, allowing them to be shrunk using the delete[] operator, here:

http://ncomputers.org/content/code.php?src=suggestions/shrink%20arrays%20cpp

Cire:
We want to write correct and usable code. Maybe we are not explaining exactly what we want to achieve. We ask you to please be patient with us: English is a foreign language for us, and writing an answer that best describes our ideas and points of view usually requires more free time than we have.

Dhayden:
We are suggesting standard shrinkable arrays, which could be useful in some cases.

Have you examined your algorithms? Profiled the code?

Yes: shrinkable arrays could optimize one of our newest programs, which we will publish soon.

Best regards,

http://ncomputers.org
This ten year old article may be of interest: http://www.drdobbs.com/policy-based-memory-allocation/184402039

Excerpt:
Today's general-purpose memory allocators have reached reasonable speed and low fragmentation for a large category of programs. However, in the memory-allocation realm, a little information can go a long way. Application-specific information about allocation patterns helps to implement specialized memory allocators that heavily improve the bottom line of many high-performance applications.
...
While general-purpose allocators have average overheads in the hundreds of cycles, a good customized memory allocator can require as few as half a dozen cycles.

That's why many high-profile, high-performance applications (GCC, Apache, and Microsoft's SQL Server to name just a few) implement their own memory allocator.


That is also why every competent C++ software house invests in implementing a variety of standard-library-compatible allocators.
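As a sketch of how little the C++11 minimal allocator interface requires: the hypothetical mallocator below uses malloc as a stand-in for whatever specialized strategy an application would really use, and it plugs directly into the standard containers:

#include<cstdlib>
#include<new>
#include<vector>

//Minimal C++11 allocator: value_type, allocate, deallocate, a converting
//constructor and equality operators are all that std::allocator_traits needs.
template<typename T>
struct mallocator{
	using value_type=T;
	mallocator()=default;
	template<typename U> mallocator(const mallocator<U>&){}
	T*allocate(std::size_t n){
		if(void*p=std::malloc(n*sizeof(T)))return static_cast<T*>(p);
		throw std::bad_alloc();
	}
	void deallocate(T*p,std::size_t){std::free(p);}
};
template<typename T,typename U>
bool operator==(const mallocator<T>&,const mallocator<U>&){return true;}
template<typename T,typename U>
bool operator!=(const mallocator<T>&,const mallocator<U>&){return false;}

int main(){
	std::vector<int,mallocator<int>> v; //standard container, custom allocator
	v.push_back(42);
	return 0;
}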
Dear JLBorges:

Thank you very much for your answer and your time!

I started to read the article you posted; I searched for the word "shrink" and didn't find it.

We know that some high-performance applications and compilers have their own memory allocator.

I didn't know that GCC, Apache, and Microsoft SQL Server have one.

We think it is probable that some high-performance systems which use special allocators will one day use some of the code we are sharing with the world.

That's why we want to avoid overloading the operators ourselves, like this:

#include<cstdlib> //malloc, free, realloc
#include<new>     //std::bad_alloc

void*operator new[](size_t s){
    if(void*p=malloc(s))return p;
    throw std::bad_alloc(); //a replacement operator new[] must not return null
}
void operator delete[](void*p){
    free(p);
}
void operator delete[](void*p,size_t s){
    //Shrinks the block in place. It doesn't support objects with destructors.
    if(realloc(p,s)!=p)throw 0;
}

Instead, we suggest standard shrinkable arrays:

//Suggested standard syntax to shrink an array:
delete[10]pointer;

Shrinkable arrays, if they one day become standard, should work with every special high-performance memory allocator.

http://ncomputers.org/content/code.php?src=suggestions/shrink%20arrays%20cpp
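For illustration, this is roughly what we mean, done today for trivially copyable types only; the helper name shrink_trivial is our own invention, and the delete[n] syntax above would be needed precisely because this approach cannot run destructors:

#include<cstdlib>
#include<new>
#include<type_traits>

//Hypothetical helper: shrink a malloc'ed array of a trivially copyable type.
//realloc to a smaller size usually keeps the block in place, but the standard
//allows it to move, so the returned pointer must replace the old one.
template<typename T>
T*shrink_trivial(T*p,std::size_t new_count){
	static_assert(std::is_trivially_copyable<T>::value,
		"no destructors are run here, so only trivially copyable types");
	void*q=std::realloc(p,new_count*sizeof(T));
	if(!q&&new_count)throw std::bad_alloc();
	return static_cast<T*>(q);
}

int main(){
	int*a=static_cast<int*>(std::malloc(1000*sizeof(int)));
	if(!a)return 1;
	a=shrink_trivial(a,10); //keep only the first 10 elements
	std::free(a);
	return 0;
}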

Best regards,

http://ncomputers.org
Note that 90% of the time computers have enough memory and can waste some gigabytes for a short time
And push everything into swap. Due to the growth factor of 2 (both the old and new buffers must exist at once during a reallocation), the low efficiency of many small allocations, and only 4 GB of memory, I managed to hang my PC while allocating a total of 2 GB. On the plus side, had I been patient and waited it out, the results would have shown less than a second for vectors and about an hour for arrays.
Not to mention that for server applications you usually have a hard memory limit, like 2 MB max per task.
Thank you for your comments, your answers and your time.

MiiNiPaa:
I agree with you.

keskiverto:
Lack of a specific word is no proof.

Yes, that is true. I quickly re-read the article and found the section on In-Place Resizing.

Sometimes I wish I had more free time to read every article you kindly share with us.

I read some parts of those you shared with us.

In the last one, I found the following:

The C++ language has allocators (referring to the concept, rather than specifically std::allocator), which allow specialization of allocation, but lose the advantages offered by realloc.
While actually moving memory, as offered by realloc, is often not possible in C++, realloc may extend or shrink a block without moving data, which is a useful addition to standard allocators.
Currently, this is just me thinking this is a good idea and I did not find a similar proposal on isocpp.org.

Thank you very much for sharing it with us!

Best regards,

http://ncomputers.org
I have updated the content on our website about this topic using all the knowledge we've collected thanks to your help! Thank you very much, all of you!