Hi guys, I have come to learn that the first example below is not good practice, and on some compilers it will not even compile, because n is not known at compile time.
I understand that it is not OK, but how come we can do the exact same thing dynamically when creating an array on the heap? Why is that considered OK?
After all, we still don't know the number at compile time.
int n = 0;
cout << "enter number of elements you want" << endl;
cin >> n;
int numbersOne[n]; // not ok: variable-length array

int n2 = 0;
cout << "enter number of elements you want " << endl;
cin >> n2;
int *numbersTwo = new int[n2]; // ok
delete[] numbersTwo; // new[] must be paired with delete[]
But in general, my guess after a bit of research is that in the first example the stack frame is laid out when the program is compiled, so its size has to be known at compile time, whereas the second takes memory from the heap, which is requested at run time, is always available to the program, and has much more space, so it can (probably) hold that amount of memory.
Like everything else, don't drink and drive. The stack is a precious resource in the C/C++ abstract machine.
C99 supports variable-length arrays, which compilers typically implement much like alloca(). VLAs are not part of standard C++, but you can call alloca() yourself, and of course there's std::vector<>, which uses the heap.
GCC is very (and too) flexible in its mixing of C and C++ language features across standard versions. You have to combine -std= and -pedantic to try to lock it down. As such, you're experiencing the C99 variable-length array feature in a C++ program. I suspect you're using GCC without specifying the language standard, and if you are, that's why your code compiles.
You have to be really knowledgeable and careful to write exception-safe code with raw pointers. Would a smart person do all that work, knowing they can achieve the same with almost no effort by using standard containers and smart pointers?