A C++ algorithm problem - skyscrapers

Hi, can someone please take a look at this skyscrapers problem on SPOJ:
http://www.spoj.pl/problems/CEPC08B/

The time limit is 10 s, but that is still not enough for the given ranges. We need a really efficient algorithm; I tried 3 different algorithms, but every one of them exceeded the time limit!

Can someone tell me an efficient algorithm to solve this problem?
You can't solve this in under 10 secs? Are there any constraints on the power of the CPU? This should be solvable on any modern desktop in under a second without a fancy algorithm.
What did you try?
Hey, come on, did you see the ranges?
Each test case contains as many as 1,000,000 buildings and 1,000,000 days, and the height of each building can be as much as 1,000,000,000 units (which is also the upper limit on the day values).

So now you see... Anyway, I suggest you try it out yourself and see if your solution is accepted. Please check it out.
It says here t<=15 test cases, n buildings, and d<=1000000 days. This means that the longest the entire calculation could take is (time for one day)*15000000. In order to exceed the limit, each day would have to take longer than... 1/3 of a microsecond? Is that even possible?
Don't be so sarcastic, helios xD...
Actually, I was being totally serious. 1/3 of a microsecond for something that requires an unknown number of comparisons (there can be any number of buildings, according to that page) is very little, I think.

I suppose there must be a solution in logarithmic time...
Hey helios, I really appreciate your detailed analysis of this problem, but I'd suggest: why don't you go on and solve the problem, try to submit it, and see if it passes the time limit?

Besides, I'd like to make some corrections to your analysis:

i) The number of days is <= 10^6, BUT the "length" of a day (its value) can be as large as 10^9.

ii) Your evaluation is correct that the total time would be around (time for one day)*15,000,000; however, the time for calculating the answer for one day again depends on how many times you have to go through the loop (which depends on the algorithm you choose). And that will easily take the time beyond the limit.

So try it out first!

As much as this may surprise you, some people are actually busy, and don't have the time to join contests.

A simple algorithm involves going through the number of buildings once, seeing which are above the water (a building is above the water if its height is greater than the day), and counting regions.
I can't imagine how this loop could be done more than once, or what it means for a day to have a "length", as the region count is supposed to be done just once a day.
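
A minimal sketch of that single pass (assuming the heights sit in a std::vector and that the water level equals the day number; the function and variable names are made up for illustration):

#include <vector>

// Count the regions on one day: a region is a maximal run of consecutive
// buildings whose height is strictly greater than the water level.
int countRegions(const std::vector<int>& heights, int waterLevel)
{
    int regions = 0;
    bool inRegion = false;                // are we inside an above-water run right now?
    for (int h : heights)
    {
        if (h > waterLevel)
        {
            if (!inRegion) ++regions;     // a new above-water run begins here
            inRegion = true;
        }
        else
            inRegion = false;             // a submerged building ends the run
    }
    return regions;
}

Called once per queried day this is linear in the number of buildings, which is what the per-day timing estimate refers to.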

There is a fourth number, but it doesn't say what it means. It just says that it's inputted.

Finally, I said that, with the greatest number of test cases, days, and height of buildings, the average maximum time for each day without passing the limit is 1/3 of a microsecond.
Of course each day could take more or less, but the calculation is for average time, not actual time.
This algorithm, though simple, is the most inefficient solution to this problem. I'll bet this algorithm can never pass the given time constraints.

Helios, I think you've overlooked one of the most important aspects of this problem. For any given test case, the number of days (for which we have to calculate regions) can be as much as 10^6. Now, if you try to run the loop once for each day of each test case, you'll end up nowhere, because the total number of comparisons will be:

(no. of test cases) * (no. of days) * (length of each for loop) = 15*10^12 + (additional comparisons required for separating regions).

Now, an average 2 GHz processor can perform 14,500 million instructions per second, which means that one instruction takes 69*10^(-12) seconds.

Now, even if we ignore the *additional* comparisons, the total time will be 15*10^12 * 69*10^(-12) = 1035 seconds!!



Reference for the MIPS value of a 2 GHz processor: http://en.wikipedia.org/wiki/Instructions_per_second#Timeline_of_instructions_per_second
Why are you reacting like that? I'm not arguing with you.
I'm saying what you're saying. On average, each day (regardless of the operations inside) can take no more than a third of a microsecond. Got that? The calculation of each day. I didn't add more factors because: a) the problem doesn't give an upper limit to n (the number of buildings), and b) there is another variable of unknown meaning. What the hell is t_i [1; 10^9]? It says d is the number of days, so what does t_i do?
I did say I considered that impossible in linear time (O(n)).

After re-reading it, I understand what t_i means. In that case, each day should take no more than 1/3 of a nanosecond. Now, that is impossible in linear time.
Helios, I don't know why you're even bothering to help this guy. Let him figure it out on his own with that attitude. You're taking time out of your day to assist. He needs to show a little respect!
Okay, okay... my apologies... sorry if I was too rude, I didn't mean to be.
Anyway, it's fine... I'll try to figure it out on my own. Thanks for your help.
I just came here to discuss it because I had already tried 3 algorithms:
1) the simple one which you described
2) using recursion
3) using dynamic programming
Every time I made it more efficient, but it still got "time limit exceeded" in each case.
So I thought of discussing it with you people... but there's no point in arguing over an algorithm question.
It would have been great if you could try it out yourself... it would take just 15-20 minutes to solve that problem and get it submitted. But since you're a busy man, I understand.

But thanks for your help anyway... :)
We need a really efficient algorithm, I tried using 3 different algorithms

Which algorithmic paradigms with which (asymptotic) running times did you try? Looking at the problem, a plane sweep seems intuitive, but divide & conquer should work nicely, too. (Actually, the "Watershed Transformation" from image processing, namely segmentation, is a similar problem in two dimensions (with several water sources). You might find efficient algorithms and implementations under that term.)

Looking at an upper bound, let's denote the number of buildings b, the number of days d, and the number of segments s. Then we can determine the order of event points in a plane sweep algorithm in O(b*log(b)). For each event point we have to access the region information. Given a suitable data structure this is possible in O(log s). We have to do this for every event point, which in the worst case is b times. So we have, all in all, an O(b*log(b)+b*log(s)) upper bound. However, s is obviously bounded by b, so the bound becomes O(b*log(b)). Um, the number of days isn't even important, so these "the calculation of each day" statements are... useless. (As for a lower bound, Ω(b) is pretty much obvious. I'm quite sure that it is Ω(b*log(b)) (otherwise, my outlined algorithm would be suboptimal, which cannot be, as it is from myself; end of proof!).)
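
One possible concrete form of that sweep, as a sketch only (it relies on the observation that a region starts at building i exactly for the water levels w with h[i-1] <= w < h[i], treating h[-1] as 0, and it assumes, as above, that a building is visible when its height is strictly greater than the water level; all names are made up for illustration):

#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

// Turn every "a region can start here" interval into two events (+1 at its
// lower end, -1 just past its upper end), sort the events once, and answer
// all day queries in one pass in increasing level order.
std::vector<int> regionsPerDay(const std::vector<int>& h,
                               const std::vector<int>& days)
{
    std::vector<std::pair<int, int>> events;        // (water level, +1 or -1)
    int prev = 0;
    for (int x : h)
    {
        if (x > prev)                               // a rise: a region can start here
        {
            events.push_back({prev, +1});
            events.push_back({x, -1});
        }
        prev = x;
    }
    std::sort(events.begin(), events.end());

    std::vector<std::size_t> order(days.size());    // query indices in level order
    for (std::size_t i = 0; i < order.size(); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return days[a] < days[b]; });

    std::vector<int> ans(days.size(), 0);
    int running = 0;
    std::size_t e = 0;
    for (std::size_t qi : order)
    {
        while (e < events.size() && events[e].first <= days[qi])
            running += events[e++].second;          // apply every event up to this level
        ans[qi] = running;                          // open intervals = visible regions
    }
    return ans;
}

Sorting the events and the queries dominates, so this is O(b*log(b) + d*log(d)), in line with the bound above.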

On a personal note: why won't people accept that Big-Oh calculus is more important than stupid "can-do-n-calculations-per-second" stuff when it comes to efficiency? In the time it took to look that crap up you could have solved the problem. And in practice you won't get those values for your problem anyway.

Oh, and Return 0: this guy has a problem and the answers he gets are unusable... wouldn't you get angry if people reacted this way to your problems? I know I would...
Hi, thanks for your help.

You would be surprised to know that my implementation using dynamic programming was very similar to what you described, with a complexity of O(n log(n)).

But I'd like to make a counter-point to what you said: "why won't people accept that Big-Oh-calculus is more important than stupid "can-do-n-calculations-per-second" stuff when it comes to efficiency?"

My point is: complexity (Big-O) represents the "efficiency" of a program (i.e. how much the program's running time will increase when the input is increased)... however, that is very different from the "performance" of a program (which actually represents the running time for a fixed input).

What I mean to say is that the Big-O notation IGNORES the *constant factor*.
I've seen algorithms with lower complexity but poor performance on normal inputs.

Take HeapSort: it has a worst-case complexity of O(n*log n). BUT quicksort, having a worst case of O(n^2), is preferred over HeapSort due to the "SMALLER CONSTANT FACTOR" hidden in quicksort's complexity.
Certainly, HeapSort is good for large inputs, but for average or smaller inputs, quicksort is better.
Here's my code:

// Typical competitive-programming shorthands, assumed here so the snippet compiles:
#include <vector>
using namespace std;
typedef vector<int> VI;                              // VI  = vector of ints
#define REP(i,n) for (int i = 0; i < (int)(n); ++i)  // REP = simple counting loop
#define pb push_back

// hts: building heights for one test case; dts: the queried days (water levels)
VI calcRegions(VI hts, VI dts)
{
    int cur = 0;
    VI dp, ans;
    dp.assign(1000000001, 0);    // dp[w] = number of regions when the water level is w
    REP(i, hts.size())
    {
        if (hts[i] > cur)        // at least as tall as the one before: nothing ends here
            cur = hts[i] - 1;
        else
        {
            // shorter than the previous building: a region ends at the previous
            // building for every water level w with hts[i] <= w <= cur
            for (int j = hts[i]; j <= cur; j++)
                ++dp[j];
            cur = hts[i] - 1;
        }
    }
    REP(i, cur + 1)              // the last building ends a region for levels 0..cur
        ++dp[i];
    REP(i, dts.size())           // each queried day is then a direct lookup
        ans.pb(dp[dts[i]]);
    return ans;
}


hts - vector containing the heights of the buildings as given in a test case
dts - vector containing the days for which we have to find the regions

ans - solution vector containing the "number of regions" on each day of the dts vector


As you can see, I have to go through the for loop only once, and yes, the number of days isn't important, because by going through the loop just once I was actually able to find out the number of regions for every possible day.

Now you would also notice this: "dp[1000000001]"... this is too big for an average PC to handle, so this program will consume a very large amount of memory (but there are no constraints on memory).

But still, I implemented the same algorithm using maps, and I found the solution for only the *required* days and not *every possible* day. The memory consumption was reduced dramatically, and even though the complexity increased, it still wasn't fast enough to pass the time constraint of 10 s. Looks like the constant factor is too big!
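
For what it's worth, the map idea can be sketched roughly like this (an illustration only, not the exact submission: it keeps the counting from calcRegions above, but replaces each range update with two point updates in a std::map of differences and answers every queried day with a binary search; the name calcRegionsSparse is made up):

#include <cstddef>
#include <map>
#include <vector>
using namespace std;
typedef vector<int> VI;

VI calcRegionsSparse(const VI& hts, const VI& dts)
{
    map<int, int> diff;                   // diff[w] = change in the region count at level w
    int cur = 0;
    for (size_t i = 0; i < hts.size(); ++i)
    {
        if (hts[i] > cur)
            cur = hts[i] - 1;
        else
        {
            ++diff[hts[i]];               // was: ++dp[j] for j = hts[i] .. cur
            --diff[cur + 1];
            cur = hts[i] - 1;
        }
    }
    ++diff[0];                            // was: ++dp[i] for i = 0 .. cur
    --diff[cur + 1];

    // Prefix-sum the differences: the value stored at a key now holds the region
    // count for every level from that key up to (but excluding) the next key.
    int running = 0;
    for (map<int, int>::iterator it = diff.begin(); it != diff.end(); ++it)
    {
        running += it->second;
        it->second = running;
    }

    VI ans;
    ans.reserve(dts.size());
    for (size_t i = 0; i < dts.size(); ++i)
    {
        map<int, int>::iterator it = diff.upper_bound(dts[i]);   // first key > day
        ans.push_back(it == diff.begin() ? 0 : (--it)->second);
    }
    return ans;
}

With b buildings and d days this is O((b + d)*log(b)) and needs no huge array, though the constant factor of the map lookups is of course larger than that of a plain array.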

Can you find errors/improvements, or suggest a new, precise algorithm?
Meanwhile, I'll google the Watershed Transformation as you suggested.

Really, thanks for your assistance!! :)
I agree that there are (rare) cases where the O notation isn't very helpful - e.g. the simplex algorithm actually being efficient despite its O(n^3) time complexity, or the linear-time polygon triangulation with a ~10^150 constant (by Chazelle; the Clarkson/Seidel O(n log n) algorithm is faster for any polygon that fits into your RAM, that's for sure).

But in this case there are no hidden constants, and the O notation gives better results than your 2 GHz assumptions. (Or do you know what exactly is performed (data-)parallel, what kinds of pipelining optimizations can (not) be performed, whether the amount of operations includes comparisons or only arithmetic operations, what the impact of the cache size and data locality is, how fast your FPU is and what can be performed in parallel on it while the CPU does something else, ...?) The factor you will be off by is most likely > 100. The factor you will be off by with the Big-Oh calculus is in the same order, but it is (a) comparable, (b) independent of hardware assumptions, and (c) you see that the number of days is not important, while you were needlessly arguing about this value above. Plus, if it isn't "good enough", you can always count calculations, conditional statements etc. separately. Or you could approximate constants. No matter what, you'll have something better than your wild guessing.

Take HeapSort: it has a worst-case complexity of O(n*log n). BUT quicksort, having a worst case of O(n^2), is preferred over HeapSort due to the "SMALLER CONSTANT FACTOR" hidden in quicksort's complexity

What you describe has nothing to do with the "hidden constant"; you are comparing different scenarios. Quicksort has an average-case upper bound of O(n log n), and the O(n^2) case occurs with a probability of 1/n! (independent of the input, since the median determination it is based on uses the Monte-Carlo method). Therefore it has about no impact on the average case, which is log(n!) = O(n log(n)) and occurs with a probability of ~1-1/n!. If you compare, please compare the same scenarios (and yes, QS is worse than HS in the worst-case scenario. But not only in the Big-Oh notation, also in reality).
In the analysis I gave, these cases are pretty much the same (worst case: all buildings have a different height, best case: all buildings have the same height, average case: the heights are uniformly distributed over the given range, which will result in ~b^2/(max height - min height) collisions - too few to mention in the given ranges, i.e. same as worst case).

If I read it right, your algorithm has one loop running b times, and an inner loop running "height" times, which might be a (quite large) value. It's obvious that this is painfully slow (as you will see if you perform some Big-Oh-calculations...).
Yes, you're right... my point was only that the Big-O notation cannot always predict the "performance", but in most cases, yes, it does. So it's fine.

The outer loop runs hts.size() times, which is nothing but the number of buildings, b.

The inner loop operates ONLY IF there is a drop in the height of the buildings as we move along, and it runs as many times as the value of that drop in height.

So the worst case is O(b^2), while the average case looks like O(b*log b) to me (correct me if I'm wrong).

But still, the program doesn't pass the 10 s limit. Can you suggest some improvements/modifications?
Compile with optimization turned on.

Unroll your first loop into two loops to avoid the if-else check for every iteration.

Use vector::reserve() before all the push_backs.

Use ++j instead of j++.

...
You give me an analysis then. How often do the loops run? (i.e., given the values b, d, s and h (for height), how often is, say ++dp[j]; executed?) I was pretty sure that hts.size() is the size of the "vector containing the heights of the buildings as given in a testcase", which I denoted by the number of buildings b. (On another note, I highly doubt that your algorithm is even correct, and whatever "REP" is, as long as you don't comment/explain it or give its implementation, there is no telling. But as I said, I highly doubt it).

So the worst case is O(b^2) while the average case looks like O(b*log b) to me (correct me if i'm wrong).

Um, it's your algorithm. You go and analyze it. (Hint: wild guessing isn't considered a legitimate method in complexity analysis. I "guessed" the lower bound mainly because I suppose that you don't know Ben-Or's theorem or the theory behind algebraic decision trees, which makes lower bound argumentation somewhat difficult. But that doesn't mean that you can save yourself the work of going through your algorithm and actually determining its complexity, if you want to improve it.)

Edit
@jsmith: I don't want to be rude or something, but perhaps you should take a look at the thread before writing a reply...
I did, exception.

The number in the original post was off by 2 orders of magnitude. Compiling with optimizations will likely buy back 1/2 to 1 order of magnitude.

For the rest it seems like a vector is the wrong kind of data structure. Seems to me like a tree representation might be in order.