Operator function efficiency

1) When I was taking a course on advanced C++, one of my peers asserted that it is three times more efficient to multiply than to divide. Basically...
(num * .2) is three times more efficient than (num / 5)
Is there any truth to this claim? I have done some basic programming, and it is definitely true that division is less efficient, but is the difference that significant?

2) I know that using bitwise operators for division is more efficient, but how much more efficient is it? Would it be at all worthwhile to construct a system to use bitwise division?
(num >> 1) == (num/2)

3) Another one of my peers asserted that using bitwise operators on some sort of a string can be significantly more efficient than simply declaring an array of Booleans. Is there any truth to this claim, and if so, how would one go about implementing it?

If anyone has any insight on these questions, please let me know.
3) That is true: a bool has a size of 1 byte, so if you want an array of 16 bools, you'll be using 16 bytes.
With a char array you can use each bit as a single bool, so those same 16 flags would take only 2 bytes (if a byte is 8 bits).
You would then access those values with bitwise operations.
The problem with this is that a byte may not be 8 bits. A more portable way is to use std::bitset, a container designed exactly for this:
http://www.cplusplus.com/reference/stl/bitset/
Also, std::vector<bool> is specialized to store bools more compactly:
http://www.cplusplus.com/reference/stl/vector/
1) Yes, multiplication is much more efficient than division.

2) It is not more efficient than what the computer already does. (The hardware does everything using bitwise operations, including division.)

However, a shift operation incurs less overhead than either a full-blown multiplication or a full-blown division operation. That is, certain individual machine instructions work in less time than others. Division is simply a more expensive operation than a simple shift. So if all you want is an integer division by two, you can squeeze out some cycles using a shift instead of a full division. This is only useful in places where the optimization makes sense.

Whenever possible, a good compiler will optimize divisions by powers of two into simple shifts. Hence, don't waste your time second-guessing the compiler's ability to use the hardware. Write your code the correct way and let the compiler squeeze the best performance out of it for you. Only if testing shows that the compiler consistently fails should you resort to various "I'm the compiler" hacks.

For more on digital multiplication and division, see
http://en.wikipedia.org/wiki/Multiplication_algorithm
http://en.wikipedia.org/wiki/Division_%28digital%29

Hope this helps.
Duoas' comments are spot on. Write the code as cleanly and directly as possible. The compiler is responsible for optimizing the code. If your code is too slow and you see some things that can be manually optimized, do so only then. Modern compilers are often surprising in what they can optimize. So much so that trying to outsmart a compiler will often hurt more than help. This is very much the case with arithmetic these days, since so much can be done in the SIMD units.

Know when to code for maintainability and when to code for performance. That added performance may come with a penalty that costs more in the long run.
Thanks very much. All my questions were answered and then some!

1) When I was taking a course on advanced c++, one of my peers asserted that it is three times more efficient to multiply than divide. Basically...
(num * .2) is three times more efficient than (num/5)
Personally, I think this is meaningless bollox.


Alright, then what is your take on it?
What they probably mean is that it is a damn sight faster to multiply two numbers together than to divide them, and they are thinking of that as efficiency.

They are two different operations and it is not really comparing like for like.
Bear in mind that floating-point arithmetic is much different from integer arithmetic. Multiplication and division are much different processes for floating point than they are for integers.


int foo = 5;
foo = foo * 0.2;

This converts 'foo' to a double (not as straightforward as you might think), multiplies, then converts the result back to an int. AFAIK, this would be less efficient than / 5 because of the two conversions.

On the other hand... if you cut the conversion out:
double foo = 5;
foo = foo * 0.2;

This is probably exactly the same as foo / 5. And even if you write / 5 and it turns out to be slower, the compiler could optimize it and replace the / 5 with * 0.2. So it's not something worth stressing about.


As has already been said -- just do whatever is easiest and most straightforward to understand. Let the compiler do the optimizing. No need to micromanage. Often times you'll end up shooting yourself in the foot.

Case in point: way back when, multiplication was really slow and games had to multiply by 320 (for pixel plotting with a screen width of 320). Coders often did (y << 8) + (y << 6) instead of (y * 320).

Later, once multiplication on CPUs got faster, the << "optimization" actually turned out to be slower.

These days -- computer processing is so freaking fast, it hardly matters, as the program speed is much more likely to (read: virtually always going to) bottleneck elsewhere -- like memory accessing.
Actually, no. Division and a few other operations are slower. double(x)*.2 doesn't take the same amount of time as double(x)/5.0.
If that's true, then I would be flabbergasted if a compiler wouldn't make that optimization.

It seems like an obvious optimization to me:

x/5.0 -> x*1.0/5.0 -> x*0.2
Did anyone actually test this, or look at some assembler output to see what instructions are generated? Seems like a simple CPU cycle count would prove it one way or the other.

At any rate...

If num is of floating point type, then fine -- multiply is faster. However, if num is of integral type, then the two expressions yield different answers: the divide yields an integer result and the multiply a floating point one. In this case, it seems like there is a clear-cut "best" solution -- either you want integer truncation or you want a floating point result.
