Rectangle Sum with array2D

I've given this code several tries, but it keeps crashing.

Input two integers n and m, then n rows of m integers each (in other words, a 2D array with n rows and m columns). The last line contains four numbers x1, x2, y1, y2: the coordinates of the top-left and bottom-right corners of a rectangle (in the same coordinate system as the 2D array). Calculate the sum of all elements inside the rectangle.
Example :
INPUT 2 3 // 1 1 9 // 8 2 9 // 1 1 2 2
OUTPUT 12
Explanation: the rectangle's top-left corner is (1,1) and its bottom-right corner is (2,2), so you print the sum of a[1][1], a[1][2], a[2][1], a[2][2], that is, 1+1+8+2=12.

#include<iostream>
using namespace std;
#define lli long long int
#define FOR(i,k,n) for (lli i=k;i<=n;i++) 
#define ifbr(i,n) if(i==n) break;
int main () {
    lli arr[999999][999999];
    lli row,col;
    lli sum=0; // My code crashed HERE and I don't know why
    cin >> row >> col;
    FOR(i,1,row) {
        FOR(j,1,col) {
            cin >> arr[i][j];
            ifbr(j,col);
        }
        ifbr(i,row);
    }
    lli a,b,c,d;
    cin >> a >> b >> c >> d;
    FOR(i,a,b) {
        FOR(j,c,d) {
            sum+=arr[i][j];
            ifbr(j,d);
        }
        ifbr(i,b);
    }
    cout << sum;
    return 0;
}
lli arr[999999][999999];

This is an attempt to make an array, on the stack, containing 999999 * 999999 = 999998000001 long long int values.

Let's guess that one long long int is 8 bytes. So this array occupies

999998000001 * 8 = 7999984000008 bytes

That is approximately 8000 GB (about 8 TB).

The stack probably has about 2 MB (megabytes) of space on it, and I expect your PC has 8GB of memory in total? Maybe 16GB? How about even 64GB? Not enough.
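
For what it's worth, the quickest ways out of the crash are to make the array far smaller, or to move it out of the stack entirely. A minimal sketch of the latter, assuming a sensible maximum of 1000 x 1000 (my numbers, not yours):

#include <iostream>
using namespace std;

// Global, so it lives in static storage rather than on the stack.
// 1001 * 1001 * 8 bytes is about 8 MB - fine here, but already far
// too big for a typical 2 MB stack.
long long arr[1001][1001];   // one extra so 1-based indexing stays in bounds

int main() {
    long long row, col;
    cin >> row >> col;
    for (long long i = 1; i <= row; i++)
        for (long long j = 1; j <= col; j++)
            cin >> arr[i][j];
    // ... rest of the program ...
}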


Also:
#define FOR(i,k,n) for (lli i=k;i<=n;i++) 
#define ifbr(i,n) if(i==n) break; 

WTF? Are you trying to write in some other language entirely, using the preprocessor to mash your code to fit?
Thanks for that. My code runs now, but I don't get the right answer. Can you help me fix that?
And btw, does the #define preprocessor affect how main() executes? I've been using it for ages. It's like, once upon a time I took a contest on Codeforces, and I looked at other people's solutions, and the pro's was like 20 lines of #define. So I think that's a good idea xD

// fixed code (I still keep the lli define, though)
#include<iostream>
using namespace std;
#define lli long long int
int main () {
    lli arr[1000][1000];
    lli row,col;
    lli sum=0;
    cin >> row >> col;
    for(int i=1;i<=row;i++) {
        for (int j=1;j<=col;j++) {
            cin >> arr[i][j];
            if (j==col) break;
        }
        if (i==row) break;
    }
    lli a,b,c,d;
    cin >> a >> b >> c >> d;
    for(int i=a;i<=c;i++) {
        for(int j=b;j<=d;j++) {
            sum+=arr[i][j];
            if (j==d) break;
        }
        if (i==b) break;
    }
    cout << sum;
    return 0;
}


So input is 2 3 // 1 1 9 // 8 2 9 // 1 1 2 2. The output is supposed to be 12 but it turns out to be 2 instead.
I looked at other people's solutions, and the pro's was like 20 lines of #define. So I think that's a good idea xD


Creating your own personal programming language to meet your own needs is fine for yourself. If you ever write code that anyone else will have to read or maintain, doing that sort of thing will get you beaten up in the car park. That person might have been a pro, but the code that person was writing wasn't professional; it was personal.

#define changes your code before the compiler sees it.

You wrote this:
#include<iostream>
using namespace std;
#define lli long long int
#define FOR(i,k,n) for (lli i=k;i<=n;i++) 
#define ifbr(i,n) if(i==n) break;
int main () {
    lli arr[999999][999999];
    lli row,col;
    lli sum=0; // My code crashed HERE and I don't know why
    cin >> row >> col;
    FOR(i,1,row) {
        FOR(j,1,col) {
            cin >> arr[i][j];
            ifbr(j,col);
        }
        ifbr(i,row);
    }
    lli a,b,c,d;
    cin >> a >> b >> c >> d;
    FOR(i,a,b) {
        FOR(j,c,d) {
            sum+=arr[i][j];
            ifbr(j,d);
        }
        ifbr(i,b);
    }
    cout << sum;
    return 0;
}



Your compiler saw this. Exactly this, just as if this is what you wrote yourself without any #define in it.
#include<iostream>
using namespace std;
int main () {
    long long int arr[999999][999999];
    long long int row,col;
    long long int sum=0; 
    cin >> row >> col;
    for (long long int i=1;i<=row;i++) {
        for (long long int j=1;j<=col;j++) {
            cin >> arr[i][j];
            if(j==col) break;;
        }
        if(i==row) break;;
    }
    long long int a,b,c,d;
    cin >> a >> b >> c >> d;
    for (long long int i=a;i<=b;i++) {
        for (long long int j=c;j<=d;j++) {
            sum+=arr[i][j];
            if(j==d) break;;
        }
        if(i==b) break;;
    }
    cout << sum;
    return 0;
}

That's all it does. #define here just rewrites pieces of your source code, just as if you had typed it yourself.
#define is NOT a good idea. It was used a lot early in the language, and in C, where some of the better tools don't exist. Macros are difficult to debug, constants end up typed in unexpected ways (to someone not used to it), and type and keyword swaps make the code hard to read for everyone else, who knows the normal types and keywords but not yours... there are very few places where #define is a good idea, and those will come to you in time (e.g. building off the built-in debugging macros in a new macro can have merit).
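
To see why macros are hard to debug, remember it is pure text substitution, so it happily produces code you did not intend. A classic illustration (not from your program):

#include <iostream>

#define SQUARE(x) x * x      // looks like a function, but it is only text

int main() {
    int n = 3;
    // Expands to  1 + n * 1 + n  ->  1 + 3 + 3 = 7, not (1 + 3) squared = 16.
    std::cout << SQUARE(1 + n) << '\n';
}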

Don't be a lemming. Just because some other idiot did something is no reason to copy off them. If they had a reason, then understand the why before you copy; then you can do as they did with justification, OK. But don't just blindly follow... not every self-proclaimed expert is one, and not every piece of code on the web is worth copying. There is code on the web from the 90s, you know... it may work, but it's not a great way to do things anymore!
Pardon me for piling on here, but I'm going to expand on @jonnin's and @Repeater's points about defines.

Once in a while someone picks up a bad idea that just seems to stick in their mind. I offer this with the intent of counseling someone studying the language, in order to be helpful, but I can't help using a tone that sounds scolding. I ask for a pardon, though, because I'm an old hand at this and the technique is rather offensive :)

I note that it was "handed" to you, through your review of other people's solutions, in a contest.

It sounds like you picked up a technique in a contest where the original "professional" (and we've all used that term ironically) was probably motivated by typing speed, which echoes Repeater's point that this was at the expense of readability, and of common sense if not logic and reason.

Let me focus on this:

FOR(i,1,row)

In this use, no one reading the code can tell how the loop's test is implemented, how the counter is incremented, or whether "i" is indeed local to the loop. It also only works for the "long long int" type, which, I admit, is a lot of typing for an integer of that form. It obscures what is being said.

I've been a developer for decades, and I operate my own firm. I've hired hundreds of programmers in my career. In case there is any question about the wisdom of using a define in this way, let me spell it out in stark terms: this is a job-ending practice. I might issue one and only one warning, but a persistent reliance upon macros (the defines) is reason to fire a programmer. I just could not accept the corruption this puts in the code base, for the reasons given by Repeater and jonnin; it ends up wasting everyone else's time - and that's money.

There are better approaches, too. Modern C++ concepts express these ideas far better than this.

Take "lli":

#define lli long long int

The old-fashioned, built-in C types are a problem. They are not clearly defined. They change size depending on the compiler and the platform target. Even when targeting a 64-bit build on the same operating system, one compiler may make "long" 64 bits while another makes it 32.

It isn't that these are "bad", but they are not explicit. Even a "char" is not always 8 bits. Though, frankly, in 40+ years I've never seen common use where it isn't, there are "exotic" situations where it may be 16. To combat this, there are newer types which make the sizes universal.

int64_t, for example, is always a 64 bit signed integer. To know the actual size of the "long long int", I'd have to look up how a particular compiler set for a particular target interprets that, but I can rely on the fact that "int32_t" will be 32 bits, and "int8_t" will be 8 bits.
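
For example, the fixed-width types live in <cstdint>; a small sketch of what I mean (illustration only):

#include <cstdint>
#include <iostream>

int main() {
    std::int64_t big = 0;          // exactly 64 bits, signed, everywhere
    std::int32_t medium = 0;       // exactly 32 bits, signed, everywhere
    long long int old_style = 0;   // at least 64 bits, but that's all you know

    std::cout << sizeof(big) << ' ' << sizeof(medium) << ' '
              << sizeof(old_style) << '\n';   // typically prints: 8 4 8
}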

So, even the short acronym of the define, "lli", echoes the ambiguity known in the language.

It could also have been done with a typedef

typedef long long int lli;

The difference, however, is in several layers. First, this isn't a mere text substitution: "lli" becomes a recognizable type. This technique is widely used, without defines. In many math-oriented systems, some kind of "Real" type is created, like:

typedef double Real;

This is done because the code itself can now be written to use the "Real" type to represent some kind of floating point (or even fixed point) underlying type, and mutate as required, merely by changing the typedef.

While the "define" could seem to do that, it fails in important ways.

Macros (the defines) do not respect namespaces, but typedefs do.

Thus, if I combine libraries for different math purposes, and if they both define a typedef for "Real", there would be no confusion because the typedefs can be wrapped within namespaces:

namespace A
{
 typedef double Real;
}

namespace B
{
 typedef float Real;
}

// A::Real is a different type than B::Real 


Note the comment at the end of that example. This is how professional developers do this kind of thing. I've used the older style (there are newer "using" alias declarations with more features - a kind of template typedef is available there, a typedef that takes parameters to finalize the actual type expression).
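
For completeness, a sketch of the same idea with the newer alias declarations (the names Grid and board are just mine):

#include <vector>

using lli = long long int;               // modern spelling of the typedef

// An "alias template": a typedef that takes a parameter.
template <typename T>
using Grid = std::vector<std::vector<T>>;

int main() {
    Grid<lli> board(3, std::vector<lli>(4, 0));   // 3 rows of 4 zeros
    (void)board;
}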

This is why a macro like "lli" in your example is so egregious to us: the exact same "lli" text could be so much more feature-rich, and language-compliant, within the language. "#define" is, essentially, a text substitution performed before the language proper. A typedef is a C/C++ language construct; "#define" really isn't. "#define" is a brutal, coarse, "rock tied to a stick" kind of hammer.

So, now, back to:

#define FOR(i,k,n) for (lli i=k;i<=n;i++)

Clearly, the "pro" you observed was trying to avoid repeated "<=", "=", and "++" keystrokes, as well as the repeated "long long int" text. In a timed contest, that could be a speed asset. Fine. Just don't learn anything else from it.

Next, however, is the storage approach. This code allocates room on the stack

  lli arr[1000][1000];
 


The larger that gets, the worse this is. The stack should never be used for such large storage. The stack is a limited resource, and this is...just lazy.

More to the point, however, is that the code accepts parameters that will likely use a small subset of this (who would sit there and type a million entries to fill up to 1,000 rows with 1,000 columns each?).

There are various ways to do this dynamically.

vector< vector< lli > > arr;

There are slight inconveniences, like having to create each row, but it uses only the standard STL.
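
A sketch of that for this problem, sized from the input instead of a fixed 1000 x 1000 (the names are mine):

#include <iostream>
#include <vector>
using namespace std;

int main() {
    long long rows, cols;
    cin >> rows >> cols;

    // rows vectors, each holding cols zero-initialised elements,
    // all allocated on the heap.
    vector<vector<long long>> arr(rows, vector<long long>(cols));

    for (long long r = 0; r < rows; r++)
        for (long long c = 0; c < cols; c++)
            cin >> arr[r][c];
    // ... rest of the program ...
}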

However, there really is no need to use 2d (or other multi-d) containers in most cases. Linear algebra uses typical 3x3 and 4x4 matrices, and those have specific math algorithms associated with them, so there's a valid exception. If, however, you're just making a 2d grid, it can be fashioned out of a simple array or vector with a touch of math.

unsigned pos = r * rowsize + c;

This little thing gives a pos such that a single-dimension array operates like a 2d array of rows and columns (here rowsize is the length of a row, i.e. the number of columns). It is "row major", meaning the storage is organized as a series of rows. It could be reversed:

unsigned pos = c * colsize + r;

This is less common, but organizes storage as a series of columns.
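
A sketch of the row-major version applied to your problem (I'm reading the last line as r1 c1 r2 c2; your symmetric example can't distinguish that from the other order, so treat it as an assumption):

#include <iostream>
#include <vector>
using namespace std;

int main() {
    long long rows, cols;
    cin >> rows >> cols;

    // One flat allocation; element (r, c) lives at r * cols + c.
    vector<long long> grid(rows * cols);
    for (long long r = 0; r < rows; r++)
        for (long long c = 0; c < cols; c++)
            cin >> grid[r * cols + c];

    long long r1, c1, r2, c2;            // 1-based corners, as in the problem
    cin >> r1 >> c1 >> r2 >> c2;

    long long sum = 0;
    for (long long r = r1; r <= r2; r++)
        for (long long c = c1; c <= c2; c++)
            sum += grid[(r - 1) * cols + (c - 1)];   // convert to 0-based
    cout << sum << '\n';
}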

The point here is that there are multiple approaches, many of which are quite liberating.

Boost has multi-dimensional containers, which can have "views" in them (a rectangular sub-region of the original 2d matrix).

You fashion a kind of view by taking in "a, b, c, d" from which you form the sum.

Imagine a container that let you lift a view of this "rectangular sub-region" of the data, then just ask for the sum.

Boost has one. If you used that, it would be much more feature rich.
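
A rough sketch of what that could look like with boost::multi_array views (written from memory, so double-check the Boost.MultiArray docs; the data is your example):

#include <boost/multi_array.hpp>
#include <iostream>

int main() {
    typedef boost::multi_array<long long, 2> grid_t;
    typedef grid_t::index_range range;

    grid_t a(boost::extents[2][3]);              // 2 rows, 3 columns
    long long data[2][3] = {{1, 1, 9}, {8, 2, 9}};
    for (int r = 0; r < 2; ++r)
        for (int c = 0; c < 3; ++c)
            a[r][c] = data[r][c];

    // A view of rows 0..1 and columns 0..1 (ranges are half-open).
    grid_t::array_view<2>::type v = a[boost::indices[range(0, 2)][range(0, 2)]];

    long long sum = 0;
    for (long long r = 0; r < 2; ++r)
        for (long long c = 0; c < 2; ++c)
            sum += v[r][c];                      // view indices start at 0
    std::cout << sum << '\n';                    // 1 + 1 + 8 + 2 = 12
}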

There are so many ways to do this that I dare not try to cover them all (I was about to, but I have to get back to work on my car).







Topic archived. No new replies allowed.