In the first, you're declaring a constant and giving it a value as you declare it.
In the second, you're declaring a constant with no value. Then you're trying to assign something to it, which is illegal.
You can't put the constant definition inside the class like that either; it would need to go outside the class, or be declared static. C++11 does add a way to do it non-static, via in-class member initializers.
I suppose this code would be more akin to first example?
No, that's really another animal. In both of those cases, the Data instance is allocated on the stack; in this latest example, dataStore is allocated from the heap.
Would using something like this have any repercussions in regards to processor strain?
I try not to laugh at questions here, but that one made me chuckle. :) No, allocating something on the heap isn't going to strain the processor. Yes, it does take a few more CPU cycles than a direct allocation on the stack. Even if you're constructing that object millions of times, the difference is so slight you're not likely to notice.
What is critical about dynamic allocation (new) is that you must always release the dynamically allocated memory in the destructor, or you will have a memory leak and will run the risk of exhausting the memory available on the heap.
Would creating a vector of a billion++ elements cause more strain than a simple array with the same number of elements?
A billion? Well now we're talking some possible strain. :)
Here's why, and how to avoid it. A vector contains an internally allocated dynamic array. Since a vector doesn't normally know in advance how many entries there are going to be, it makes a guess. That number is implementation defined, but is usually small (say, 16). When you attempt to push the 17th item onto the vector, the code has to allocate a larger instance of the array (you now have two arrays), copy the existing array to the newly allocated one, and then release the original. This can get quite expensive if you're pushing a billion items (assuming you have the memory to support it). Fortunately, vector provides a way to avoid this if you have a good idea in advance of how many items you're going to have.
Calling reserve with 1B will cause the internal array to be allocated with sufficient room for 1B items. You'll avoid any reallocation and copying unless you exceed 1B items. Keep in mind that the amount of memory required is 1B * sizeof(Item).