hello

i want to check if the last two bits of a number a are 1. am i going to do

`if ((a & 3) == 3)`

? (because in binary 3 is written as 11)
That looks correct, as long as you parenthesise it as `(a & 3) == 3`: `==` binds tighter than `&`, so a bare `a & 3 == 3` would parse as `a & (3 == 3)`, i.e. `a & 1`.

One way of looking at the bitwise `&` operator is as a method of setting bits **off**.

So take the binary value 00000011 and `&` it with your variable: every bit position where the mask has a 0 will be set **off** in the result, and every position where the mask has a 1 will retain the variable's original bit.

So the result of (a & 3) will have a zero in every position except the last two bits. You test if that == 3 to confirm that both bits are a 1.

Hey guys I have a question.

I don't know how negative numbers and float/double values are laid out in bits. And why can't I compute `(2.4 % 5)`, i.e. take the modulus of a float/double value, or apply any bitwise operator to it?

Because those operations apply only to integer types. (For the remainder of floating-point values there is `std::fmod` in `<cmath>`.)

If you want to use such operations assign your value to an integer variable, or cast it to an integer type.

But, why? And why doesn't C++ support **unsigned float**, **short float**, and **long long int**?

C++11 has *long long int*.

I don't think *unsigned float* would be as useful as unsigned integers, and there is not much hardware support for it.

You already have *float*, *double* and *long double*, so we don't need yet another type. If C++ added a *short float* it would probably be the same size as *float* on most implementations: making it smaller would just slow things down, because there is no hardware support for such 2-byte floats.

Jackson Marie wrote:
> But, why?

The internal representation of floating point numbers is optimised to match the hardware. The number will probably have separate sign, mantissa and exponent. Attempting a direct bitwise operation wouldn't make sense, as the three separate parts could be mixed up.

The point about using various bitwise operations on integer values is that the hardware and machine language is able to directly perform the operation, making it very fast and efficient. There are no such hardware equivalents for floating point values.

Topic archived. No new replies allowed.