The likely/unlikely macros don't exist unless you create them, and doing so is compiler-dependent. For gcc, you would write:
#define likely(x) __builtin_expect(x, 1) // Expect x to evaluate true
#define unlikely(x) __builtin_expect(x, 0) // Expect x to evaluate false
Like Duoas said, the 'expect' builtin (in gcc) is used for branch prediction. The macros are used a fair bit in the Linux kernel (I don't know if that's the origin of the names likely and unlikely, but it's the first place I saw them), with the proviso that the user actually profiles their code to ensure there's a real benefit. Usually the compiler or CPU will be better at branch prediction than you, and you'll often find that you're wrong about where the bottlenecks in your code are, which is why you should use a profiler when you go to optimise. The profiler tells you in which routines the program spends most of its time (supposedly 80% of runtime is spent in 20% of the code), so those are obviously the ones worth optimising most.
As for fuzzy logic, I don't have the most in-depth understanding of it, but it's a non-binary logic system. "Non-binary" means that there are more than two truth values: binary logic systems like Boolean logic have only true and false (or 0 and 1), whereas in fuzzy logic truth values are real numbers between 0 and 1. In other words, binary logic is black-and-white while fuzzy logic has shades of grey. It's like the difference between digital and analogue electronics (you could say they're analogous comparisons). One distinction worth making: fuzzy truth values represent degrees of truth (e.g. "the bottle is full" is half-true when the bottle is half-full), not probabilities (e.g. the chance that "the particle is at (x, y, z)" is true). Probability measures uncertainty about a statement that is crisply true or false; fuzzy logic measures how true a statement is in itself.