Memory Fragmentation in C

Hi,

I am supposed to write a program in C that:

Allocates memory for a sequence of 3m arrays of 500,000 elements each.
The program then deallocates all even-numbered arrays and allocates a sequence of
m arrays of 700,000 elements each.

I am supposed to measure the timings for the allocation of the first sequence
and then for the second.

When I run my code, I get some weird timings:
Sometimes it reports that both allocations took 0 ms. Other times it reports
that it took 16 ms to allocate the first and 16 ms to allocate the second.
The result I get most frequently is that it took 16 ms to allocate the first
and 0 ms to allocate the second.

Am I doing something wrong here, or is that a normal result?

If it is a normal result, what does it mean?

Here is my code:

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <Windows.h>

int main()
{

	//FIRST SEQUENCE
	int *a, *b, *c, *d, *e, *f, *g, *h, *i, *j, *k, *l;
	DWORD start, end;
	start = GetTickCount();
	a = (int *)malloc(500000 * sizeof(int));
	if(a == NULL)
		exit(1);
	b = (int *)malloc(500000 * sizeof(int));
	if(b == NULL)
		exit(1);
	c = (int *)malloc(500000 * sizeof(int));
	if(c == NULL)
		exit(1);
	d = (int *)malloc(500000 * sizeof(int));
	if(d == NULL)
		exit(1);
	e = (int *)malloc(500000 * sizeof(int));
	if(e == NULL)
		exit(1);
	f = (int *)malloc(500000 * sizeof(int));
	if(f == NULL)
		exit(1);
	g = (int *)malloc(500000 * sizeof(int));
	if(g == NULL)
		exit(1);
	h = (int *)malloc(500000 * sizeof(int));
	if(h == NULL)
		exit(1);
	i = (int *)malloc(500000 * sizeof(int));
	if(i == NULL)
		exit(1);
	j = (int *)malloc(500000 * sizeof(int));
	if(j == NULL)
		exit(1);
	k = (int *)malloc(500000 * sizeof(int));
	if(k == NULL)
		exit(1);
	l = (int *)malloc(500000 * sizeof(int));
	if(l == NULL)
		exit(1);
	end = GetTickCount();
	printf("%lu\n", end - start);
	getch();
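	//deallocate all even-numbered arrays (b, d, f, h, j, l)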
	free(b);
	free(d);
	free(f);
	free(h);
	free(j);
	free(l);
	//SECOND SEQUENCE

	start = GetTickCount();
	b = (int *)malloc(700000 * sizeof(int));
	if(b == NULL)
		exit(1);
	d = (int *)malloc(700000 * sizeof(int));
	if(d == NULL)
		exit(1);
	f = (int *)malloc(700000 * sizeof(int));
	if(f == NULL)
		exit(1);
	h = (int *)malloc(700000 * sizeof(int));
	if(h == NULL)
		exit(1);
	end = GetTickCount();
	printf("%lu\n", end - start);
	getch();
	

	return 0;
}
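
In case it is easier to read, here is a sketch of the same allocation and timing pattern written with loops and pointer arrays instead of twelve separate variables (assuming m = 4, which matches the counts above, and without the getch() pauses):

#include <stdio.h>
#include <stdlib.h>
#include <Windows.h>

#define M 4	//m = 4 gives the same counts as the code above

int main(void)
{
	int *first[3 * M];
	int *second[M];
	int n;
	DWORD start, end;

	//FIRST SEQUENCE: 3m arrays of 500,000 ints each
	start = GetTickCount();
	for(n = 0; n < 3 * M; n++)
	{
		first[n] = (int *)malloc(500000 * sizeof(int));
		if(first[n] == NULL)
			exit(1);
	}
	end = GetTickCount();
	printf("%lu\n", end - start);

	//free every second array (the even-numbered ones) to fragment the heap
	for(n = 1; n < 3 * M; n += 2)
		free(first[n]);

	//SECOND SEQUENCE: m arrays of 700,000 ints each
	start = GetTickCount();
	for(n = 0; n < M; n++)
	{
		second[n] = (int *)malloc(700000 * sizeof(int));
		if(second[n] == NULL)
			exit(1);
	}
	end = GetTickCount();
	printf("%lu\n", end - start);

	return 0;
}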


Thanks.
From http://msdn.microsoft.com/en-us/library/windows/desktop/ms724408%28v=vs.85%29.aspx :
"The resolution of the GetTickCount function is limited to the resolution of the system timer, which is typically in the range of 10 milliseconds to 16 milliseconds."
Hmm, so is there a better way to get the timings for this?
The accuracy of QueryPerformanceCounter() is limited only by how long the function call itself takes and by the CPU's frequency.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904%28v=vs.85%29.aspx
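
A minimal sketch of timing a block of allocations with it, assuming the same Windows setup as the code above (the single malloc is just a stand-in for whatever allocations you want to time):

#include <stdio.h>
#include <stdlib.h>
#include <Windows.h>

int main(void)
{
	LARGE_INTEGER freq, t0, t1;
	int *p;
	double ms;

	QueryPerformanceFrequency(&freq);	//counter ticks per second

	QueryPerformanceCounter(&t0);
	//...the allocations to be timed go here; one malloc as a stand-in...
	p = (int *)malloc(500000 * sizeof(int));
	if(p == NULL)
		exit(1);
	QueryPerformanceCounter(&t1);

	//tick delta divided by ticks-per-second gives seconds; * 1000.0 gives ms
	ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;
	printf("%f ms\n", ms);

	free(p);
	return 0;
}

QueryPerformanceFrequency() reports how many counter ticks there are per second, so dividing the tick delta by it gives seconds, and multiplying by 1000.0 gives milliseconds.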