Efficient Int Values

My question is about the most resource-efficient way of assigning values, or perhaps the most secure way if efficiency doesn't apply or isn't as important in this situation. An example of what I'm asking about is below, in two different forms. Basically, I'm asking which one is better.

P.S.: I know I'm not getting a remainder with my division. I also know this isn't the most efficient way of writing this program. I'm just getting back into coding after being away from it for about a year (I'm in school, but I've been taking general-education classes for about a year).

#include <iostream>
using namespace std;

int main()
{
  int a;
  int b;
  int solution;

  cout << "Enter a number." << endl;
  cin >> a;

  cout << "Enter another number." << endl;
  cin >> b;

  solution = a + b;

  cout << "Added: " << solution << endl;

  solution = a * b;

  cout << "Multiplied: " << solution << endl;

  solution = a / b;

  cout << "Divided: " << solution << endl;

  return 0;
}



VERSUS



#include <iostream>
using namespace std;

int main()
{
  int a;
  int b;
  int sum;
  int mult;
  int div;

  cout << "Insert a number." << endl;
  cin << a;

  cout << "Insert another number." << endl;
  cin >> b;

  sum = a + b;
  cout << "Sum: " << sum << endl;

  mult = a * b;
  cout << "Multiplied: " << mult << endl;

  div = a / b;
  cout << "Divided: " << div << endl;

  return 0;
}
There's really no difference at all; an optimizing compiler would produce identical code. Of course, you could also do this (which again would probably boil down to the same assembly code when optimized):
  cout << "Added: " << (a + b) << endl;
  cout << "Multiplied: " << (a * b) << endl;
  cout << "Divided: " << (a / b) << endl;

If you want to talk about security, then neither program is secure: your program can invoke undefined behavior if reading the input fails, or if the user enters the value 0 for b. This is because you have uninitialized values, and you divide by b without checking that b != 0, respectively.

Also, for security: If your user enters two values that will cause overflow when added or multiplied together, you will also have undefined behavior. So if you want to have perfect security, you need better input sanitization.

• Using an uninitialized variable in an operation is undefined behavior.
• Integer division by 0 is undefined behavior.
• Integer overflow is undefined behavior.
Undefined behavior is a security hole because it allows your program to do anything. In most cases your program will just spit out junk or crash, but in a complicated program, undefined behavior can have devastating consequences downstream.


As for efficiency, in this excerpt, you will not notice any trace of a difference:

"Premature optimization is the root of all evil." - Donald Knuth

Technically, on a compiler that performs no optimization at all, your second program will use sizeof(int) * 2 bytes more memory (two extra variables), but this is so immaterial that it does not matter.

Readability and maintainability are much more important than wringing out efficiency, in most practical cases.

That raises the question: which program is more readable and/or maintainable? For such a simple program, they are about the same. But if you wanted to expand your program to do other operations beyond what you just did, your second program is more maintainable, because you saved your previous results instead of overwriting them.

In C++ (or modern C), you don't have to declare all your variables at the beginning of a function. You can declare them right when you need them, like this:
#include <iostream>
int main()
{
  using std::cout;
  using std::endl;

  int a = 0; // default value in case cin fails
  int b = 1; // default value in case cin fails

  cout << "Insert a number." << endl;
  std::cin >> a;

  cout << "Insert another number." << endl;
  std::cin >> b;

  int sum = a + b;
  cout << "Sum: " << sum << endl;
  
  int mult = a * b;
  cout << "Multiplied: " << mult << endl;

  if (b == 0)
  {
    cout << "Divided: error div by 0!" << endl;
  }
  else
  {
    int div = a / b;
    cout << "Divided: " << div << endl;
  }
  return 0;
}


Again though, this is such a small program that I'd say that doesn't really improve or decrease readability or maintainability.

Also, the input operator in your second program is wrong: cin << a should be cin >> a.
Dumping all the printed text to a single buffer and then printing once at the end may be slightly more efficient than multiple couts. Most real programs are not command-line, so it doesn't matter too much there either, but flushing the buffer over and over is slow. I once took all the prints (about progress, not the answers) out of a co-worker's math program that ran for 2 days and cut it down to half a day. He put them back in ... couldn't bear the silence ...

I'm on the fence about the readability vs speed topic. I mostly agree that readable is better, but I have a dozen commercial products (looking at you office 360) that are just plain sluggish. Stuff that I click on and nothing happens for 10, 15 seconds at a stretch... on a desktop that is effectively a supercomputer from not too many years ago. If I have to wait on your software in the current era, your software sucks.



Then, in both cases, that's not premature optimization, and therefore not the root of all evil. :) You have a noticeable detriment to speed, so I completely agree that that area of the code should be quarantined and optimized, with abundant comments describing what optimizations are taking place. And this is Windows we're talking about, where things tend to slow down over time to the point where clicking on the Font selection menu takes 20 seconds, so I'm not defending that, haha. The current FFTW library has abundant optimizations to wring out every clock cycle of speed, but I guarantee you that when it was first being developed, they made sure it was correct before they tried to greatly optimize it.

And true, beginners should know that, for example, std::endl flushes the buffer (and therefore can vastly slow down an I/O-bound program), and that tons of I/O in general can really affect performance. People should know about algorithmic complexity, and which operations will be slow if repeated a billion times, but for any large project, I think it's maintainability and source-code complexity that overwhelm you.

I'm definitely in favor of making sure your program works correctly before you try to optimize it. Since languages like C++ already take a long time to develop in, I think it's much more important to make sure it is correct before it is fast. After all, I wouldn't care how fast something is if it doesn't do its task correctly. But once it's correct, and it's properly profiled for performance, *then* start seeing what you can optimize.