### bit fields

```c
#include <stdio.h>

struct marks {
    int p:3;
    int c:3;
    int m:2;
};

int main(void) {
    struct marks s = {2, -6, 5};
    printf("%d %d %d", s.p, s.c, s.m);
    return 0;
}
```

According to me, the output should be 2, -6, 5,

but the output is 2, 2, 1.

Why? Please explain.
It is simply explained if you use binary notation.

For example 5 is represented as

`0000 0101`
You reserved only two bits for m. So if you extract the two low-order bits from the representation above, you get 1:

```
0000 0101
       01
```

The same is valid for -6.

The problem is that 1) 3 bits (int c:3) cannot represent the negative value -6, and 2) for bit fields it is implementation-defined whether plain int denotes unsigned int or signed int.
You need at least 4 bits to represent the value -6, which would be represented as 1010. The lower 3 bits are the value 2. If you changed the bitfield length from 3 bits to 4, I don't know if the print statement would give you -6 or 10. You should play with that to find out.

You need 3 bits to represent the value 5, which would be represented as 101. The lower 2 bits are the value 1.
btw, what's the colon operator used for in this case? I've never seen a struct definition like that...
@chipp
It's called bit fields. It specifies how many bits the variables should have.
so the size of `int` is ignored? am i right?
@chipp:
> so the size of int is ignored? am i right?

From the C++ standard

 "The sizeof operator shall not be applied to ... an lvalue that designates a bit-field."
@vald
but we know that bit fields are unsigned, and for -6 we can use 110: as the msb is 1 (that is, the msb is on), the decimal value of this binary pattern will be negative.

But when I use 3 bits for -6, it did not give the correct answer.
I don't know the reason why.
If you know, then please explain it to me.
The representation of -6 looks the following way:

`1111 1010`
You defined c as

int c:3;

So you are extracting 3 bits

```
1111 1010
      010
```

which are equal to 2. You need one more bit for the sign bit, provided that int denotes signed int. Otherwise (if int denotes unsigned int) you will get 10 instead of -6.
> The sizeof operator shall not be applied to ... an lvalue that designates a bit-field.

i'm not talking about the `sizeof` operator; what i'm talking about is the actual size of `int`. Does it decrease from 32 bits to 3 bits in this example, regardless of whether the `sizeof` operator can be applied to the related (bit-field) variable or not?
effectively, yes. The p field is a 3-bit int, the c field is a 3-bit int, and the m field is a 2-bit int.
```c
#include <stdio.h>
#include <conio.h>   /* for kbhit() (Turbo C / Windows-specific) */

struct A
{
    int a:4;
    int b:5;
};
typedef struct A A;

int main(void)
{
    A x;
    x.a = -5;
    printf("%d", x.a);
    while (!kbhit());
    return 0;
}
```
The output is -5.
As we know, the binary pattern of a negative number is its two's complement:

binary pattern of 5 = 0000 0000 0000 0101

one's complement of 5 = 1111 1111 1111 1010

two's complement of 5 = 1111 1111 1111 1011

When we extract 4 bits of the two's complement pattern, it is 11; then why is the answer -5?
First of all, I think this is a good reason to not use signed values in bitfields. There are lots of applications where non-negative values used in bitfields are really handy, but this confusion makes signed values less appealing to use.

> when we extract 4 bits of the two's complement pattern, it is 11; then why is the answer -5?

When you extract 4 bits of -5, you have 1011, not 11. I think that's where your misunderstanding is.

When converting from a smaller signed int value to a larger signed int value (for instance short to long), the most significant bit is replicated into the extra bits of the larger value. So, going from -5 (8 bits) to -5 (16 bits) is:

```
1111 1011
1111 1111 1111 1011
```

In your case, in order to print x.a as an integer (presumably 16-bit), the computer is making the following conversion:

```
          1011
1111 1111 1011
```

Because the upper bit of the 4-bit integer is 1, it gets replicated to the rest of the integer.
> First of all, I think this is a good reason to not use signed values in bitfields.

It is OK to use signed values in bit-fields provided it is explicitly specified that the bit-field is signed. There is no default-int-is-signed-int rule for bit-fields.

```cpp
#include <iostream>

struct A
{
    A( int v = -1 ) : a(v), b(v), c(v), d(v), e(v), f(v) {}

    int a ;               // default: signed int
    signed int b ;        // signed int
    unsigned int c ;      // unsigned int

    signed int d : 12 ;   // signed bit-field
    unsigned int e : 12 ; // unsigned bit-field
    int f : 12 ; // whether f is signed or unsigned is implementation-defined
};

int main()
{
    A a ;
    // on my implementation
    std::cout << a.a << ' ' << a.b << ' ' << a.c << '\n'   // -1 -1 4294967295
              << a.d << ' ' << a.e << ' ' << a.e << '\n' ; // -1 4095 4095
}
```
> When you extract 4 bits of -5, you have 1011, not 11. I think that's where your misunderstanding is.

i think what he means is 11 in the output: 1011 = 2^3 + 2^1 + 2^0 = 8 + 2 + 1 = 11.

> When converting from a smaller signed int value to a larger signed int value (for instance short to long), the most significant bit is replicated into the extra bits of the larger value. So, going from -5 (8 bits) to -5 (16 bits) is:

```
1111 1011
1111 1111 1111 1011
```

shouldn't it be:

```
1111 1011
0000 0000 1111 1011
```

CMIIW
Other than the fact that I showed going from an 8-bit signed int to a 16-bit signed int (rather than the stated 16-bit to 32-bit), I showed what I meant.

-5 in a signed char would be 1111 1011.
-5 in a signed short would be 1111 1111 1111 1011

The sign bit is replicated to the more significant bits in the larger number (I forget the name for how this happens)
Topic archived. No new replies allowed.