Using Macros to optimise if-statements?

I was reading something on another forum, and came across this:

"...Also, if you can work out how to use macros like the likely/unlikely which we are now using at FluffyLogic (and I don't understand in the slightest... Something about magnets and magic, I think...), you can cut the memory and cpu requirements of any if statements where the conclusion is 'likely' to be be one rather than the other."

What does this person mean by that?
It means the writer is either being smug or doesn't know what he is talking about (or both).

Modern CPUs perform an optimization on branches (if statements) by tracking which branch (then or else) is most likely to be taken, then speculatively processing that branch before the actual test has finished. (If this sounds like magic, it is.)
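To make that concrete, here's a made-up loop (not from the original post) where the data decides how well the predictor does:

// Made-up example: the CPU's predictor watches the history of this branch.
// If 'values' holds mostly non-negative numbers, its guess of "taken" is
// almost always right and the speculative work pays off; if the sign is
// essentially random, it guesses wrong about half the time and the pipeline
// has to throw that work away.
#include <vector>
#include <cstdint>

std::int64_t sum_non_negative(const std::vector<int>& values)
{
    std::int64_t total = 0;
    for (int v : values)
        if (v >= 0)        // predictable on sorted or mostly-positive data,
            total += v;    // unpredictable on random data centred on zero
    return total;
}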

The writer is trying to play optimizer instead of leaving it to the compiler and/or CPU. Only in a very few use cases does the programming team need to worry about this.

Is the writer working on a fuzzy logic system?
I wouldn't know if he is working on a fuzzy logic system (whatever that is) but thanks for the information.

I've done some googling and it appears it's used extensively in the Linux kernel, but there's little mention of it elsewhere. Interesting to know about, but not something I need to concern myself with.
The likely/unlikely macros don't exist unless you create them, and doing so is compiler-dependent. For gcc, you would write:
#define likely(x)   __builtin_expect(!!(x), 1) // Expect x to evaluate true
#define unlikely(x) __builtin_expect(!!(x), 0) // Expect x to evaluate false

Like Duoas said, the 'expect' builtin (in gcc) is used for branch-prediction hints. The macros are used a fair bit in the Linux kernel (I don't know if that's the origin of the names likely and unlikely, but it's the first place I saw them), but with the proviso that the user actually profiles their code to ensure there's a real benefit. Usually the compiler or CPU will be better at branch prediction than you, and you'll often find that you're wrong about where the bottlenecks in your code are, which is why you should use a profiler when you go to optimise. The profiler tells you in which routines the program spends most of its time (supposedly 80% of runtime is spent in 20% of the code), so those are obviously the ones that most need optimising.
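
If you want a quick sanity check before reaching for gprof or perf, here's a rough sketch (a made-up routine of my own, using the gcc-specific macro from above) that just times a hot loop; comparing the timing with and without the hint tells you whether it's worth keeping:

#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define unlikely(x) __builtin_expect(!!(x), 0)   // gcc-specific hint

// Hypothetical hot routine: counts rare sentinel values in a buffer.
static long count_sentinels(const std::vector<int>& data)
{
    long hits = 0;
    for (int v : data)
        if (unlikely(v == -1))   // sentinels are assumed to be rare
            ++hits;
    return hits;
}

int main()
{
    std::vector<int> data(20000000);
    for (int& v : data)
        v = (std::rand() % 1000 == 0) ? -1 : std::rand();   // roughly 0.1% sentinels

    auto start = std::chrono::steady_clock::now();
    long hits = count_sentinels(data);
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double> elapsed = stop - start;
    std::printf("%ld hits in %.3f s\n", hits, elapsed.count());
}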

As for fuzzy logic, I don't have the most in-depth understanding of it, but it's a non-binary logic system. "Non-binary" means there are more than two truth values: binary logic systems like Boolean logic have only true and false (or 0 and 1), whereas in fuzzy logic truth values are expressed as real numbers between 0 and 1. In other words, binary logic is black-and-white while fuzzy logic has shades of grey; it's like the difference between digital and analogue electronics (an analogue comparison, you could say). I'm not sure whether the truth values in fuzzy logic represent probabilities (e.g. "the particle is at (x, y, z)" has some probability of being true when the particle's exact position can't be measured) or degrees of truth (e.g. "the bottle is full" is half-true when the bottle is half-full).
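
For what it's worth, the usual fuzzy operators are simple to write down. This is a sketch of the classic Zadeh operators (min/max/complement); I have no idea what FluffyLogic actually uses:

#include <algorithm>
#include <cstdio>

// Fuzzy truth values are reals in [0, 1]; the classic (Zadeh) operators:
double fuzzy_and(double a, double b) { return std::min(a, b); }  // conjunction
double fuzzy_or(double a, double b)  { return std::max(a, b); }  // disjunction
double fuzzy_not(double a)           { return 1.0 - a; }         // negation

int main()
{
    double bottle_full = 0.5;    // "the bottle is full" is half-true
    double bottle_cold = 0.8;    // "the bottle is cold" is mostly true

    std::printf("full AND cold: %.2f\n", fuzzy_and(bottle_full, bottle_cold)); // 0.50
    std::printf("full OR cold:  %.2f\n", fuzzy_or(bottle_full, bottle_cold));  // 0.80
    std::printf("NOT full:      %.2f\n", fuzzy_not(bottle_full));              // 0.50
}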