Just traverse the array once and keep a rolling update of the first and second largest. If you are lucky with the array (e.g. everything after the second element is less than the smaller of the first two) it takes N-1 comparisons. If you are unlucky it takes 2 comparisons per new element, 2N-3 comparisons in all.
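A sketch of that rolling update (the function name and the throw-on-short-input behaviour are my choices, not from the original code):

```cpp
#include <stdexcept>
#include <utility>
#include <vector>

// One pass, tracking the largest and second-largest seen so far.
// Best case N-1 comparisons, worst case 2N-3.
std::pair<int, int> topTwo(const std::vector<int>& a) {
    if (a.size() < 2) throw std::invalid_argument("need at least 2 elements");
    int first = a[0], second = a[1];
    if (second > first) std::swap(first, second);      // 1 comparison
    for (std::size_t i = 2; i < a.size(); ++i) {
        if (a[i] > second) {        // only then can a[i] affect the top two
            if (a[i] > first) {     // second comparison, only when needed
                second = first;
                first = a[i];
            } else {
                second = a[i];
            }
        }
    }
    return {first, second};
}
```

Elements that fail the first test cost only one comparison each, which is where the N-1 best case comes from.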
Sorting the whole array would be O(N log N) at least. If you insist on sorting, use std::partial_sort so that only the 2 largest end up at the front; with just two positions to sort it is O(N). Any form of sorting will, of course, rearrange the array.
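Something like this (taking the vector by value so the caller's array is left alone; the wrapper is mine):

```cpp
#include <algorithm>
#include <functional>
#include <utility>
#include <vector>

// std::partial_sort leaves only the first two positions sorted (descending
// here, via std::greater); the rest ends up in unspecified order.
// Assumes v.size() >= 2.
std::pair<int, int> topTwoBySort(std::vector<int> v) {   // copy: sorting rearranges
    std::partial_sort(v.begin(), v.begin() + 2, v.end(), std::greater<int>());
    return {v[0], v[1]};
}
```

Passing by value costs a copy, but it is the simplest way to honour "don't rearrange the array" while still sorting.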
This method does it in about 1.5N comparisons. It compares the elements of the given array in pairs, builds 2 new vectors holding the higher and lower element of each pair, then finds the greatest element in each of those vectors. The pairing takes n/2 comparisons and each scan takes about n/2 more, so the total is 3*n/2, i.e. (3/2)N comparisons.
Are you sure? If you take them in pairs like this, then the largest and second-largest might both be in the maximums array if they came from different pairs. Then line 59 prints the wrong answer.
You are going outside the array bounds anyway: if index goes up by 2 each time, your loop starting on line 20 will end up trying to read element 2*arrayAmount-1.
Theoretically you can do it with 0 comparisons in a couple of ways (for integer numbers), but in practice? I'm not sure it's possible. In theory, you could use your infinite memory to do an O(N) sort with no comparisons at all and peel off the top 2, or use a hashing algorithm that picks target locations based on the key values in an infinitely long target array.
... you could make a new vector of the values %10, %100, %1000, ... to get a vector of the largest candidates, and then subdivide that one. This isn't LESS WORK, it's FEWER COMPARISONS; it's probably much, much, much more work. It also does not help if all the numbers are identical, and I am not sure what to do about that kind of edge case.
There are probably a couple of exploitable C++ syntax tricks that simply avoid explicit comparisons. The only example I have handy is the core of my integer power routine:
result *= lut[!(e&1)][0]; p *= p;
The comparison and jump based on whether the bit in the power is zero or one is avoided with some seriously messed-up gibberish and a lookup table holding 2 pointers. With this sort of nonsense it may be possible to get to zero just by exploiting the word 'comparison': I don't compare anything, but instead cook up an index that produces a 0 or 1 result (false, true), which then pulls the (value if false, value if true) entry from a 2-element array.
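A self-contained reconstruction of that trick, simplified from the snippet above (the original indexes a table of two pointers; the function name and exact table layout here are my guesses):

```cpp
// Branch-free binary exponentiation: instead of "if (e & 1) result *= p;"
// an index computed from the current bit selects either p or the constant 1
// from a two-pointer lookup table, so no comparison-and-jump on the bit.
long long ipow(long long base, unsigned e) {
    long long result = 1;
    long long p = base;
    long long one = 1;
    long long* lut[2] = { &p, &one };   // lut[0] when the bit is set, lut[1] when clear
    while (e) {
        result *= *lut[!(e & 1)];       // !(e & 1) is 0 for a set bit, 1 for a clear bit
        p *= p;                         // square the running power each step
        e >>= 1;
    }
    return result;
}
```

The loop condition still tests e against zero, of course; the trick only removes the per-bit branch.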
Shezshade, what does your code do for # of comparisons if ALL the values are identical?
OK, split them in pairs as you are doing (or would be once you fix your loop). That is N/2 comparisons.
Then find the top two in array maximums[]. That is just under 2*(N/2) comparisons. Finally, compare that runner-up with the single element in minimums[] that was displaced by the overall maximum; it is the only element of minimums[] that could possibly be the overall second place. In most cases, though, your second-to-top will not come from array minimums[], as you imply on line 59.
That does indeed seem to come to about 1.5N comparisons in total, albeit at the cost of doubling the amount of memory used.
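Putting the corrected pairwise scheme together (my own sketch of the steps above; for brevity it assumes an even array length of at least 2):

```cpp
#include <limits>
#include <utility>
#include <vector>

// Pairwise split, then fix up the case the naive version gets wrong:
// the runner-up may be the element that lost its pair to the overall maximum.
// Roughly N/2 + N + 1, i.e. about 1.5N comparisons in total.
std::pair<int, int> topTwoPairwise(const std::vector<int>& a) {
    std::vector<int> maximums, minimums;
    for (std::size_t i = 0; i + 1 < a.size(); i += 2) {   // N/2 comparisons
        if (a[i] >= a[i + 1]) {
            maximums.push_back(a[i]);
            minimums.push_back(a[i + 1]);
        } else {
            maximums.push_back(a[i + 1]);
            minimums.push_back(a[i]);
        }
    }
    // Top two of maximums[], remembering which pair the winner came from:
    // just under 2 * (N/2) comparisons.
    std::size_t best = 0;
    int first = maximums[0];
    int second = std::numeric_limits<int>::min();
    for (std::size_t i = 1; i < maximums.size(); ++i) {
        if (maximums[i] > second) {
            if (maximums[i] > first) {
                second = first;
                first = maximums[i];
                best = i;
            } else {
                second = maximums[i];
            }
        }
    }
    // The loser of the winning pair may still be the overall runner-up: 1 comparison.
    if (minimums[best] > second) second = minimums[best];
    return {first, second};
}
```

An input like {10, 9, 1, 2} shows why the final comparison is needed: 9 loses its pair to the overall maximum and lands in minimums[], yet it is the true second place.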