Any idea when we're supposed to be getting binary literals? Yes, I know hex is easier to read, but in some places I'd rather see directly which bits are flipped. Ya know?
Some old compilers supported binary integer constants. For example, from the documentation of one old compiler:
|Constants beginning with the characters 0b are taken to be binary constants. Only the digits 0 and 1 are valid within a binary constant. For example, 0b1101 is a binary integer constant.|
|Wasn't there a discussion somewhere as to why they were proposed and dropped from C?|
Dunno. If there was, I haven't read it :P But glad to see GCC supports it regardless.