difference between 1 and 01

Is there any difference between the two? For example, can I assign the value "1" instead of "01" to a variable? Or is the user allowed to enter "03" instead of "3" in the console window? Any difference according to the standard?

Thanks!
An integer literal with a leading 0 is treated as an octal number, so if you write x = 010;, x will hold the value 8.
This isn't true when reading an integer from a stream, though.
So in the case x = 010, whatever x's type is, it will be assigned 8?
010 is an integer literal and equal to 8.
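A minimal sketch, just to illustrate the point (the variable names are made up):

#include <iostream>

int main()
{
    int x = 010;                        // octal literal: 1*8 + 0, so x holds 8
    int y = 10;                         // plain decimal literal: y holds 10
    std::cout << x << ' ' << y << '\n'; // prints "8 10"
    return 0;
}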
I see. Thank you Athar!
How about hexadecimal and such?
Hexadecimal literals start with 0x, so x = 0x8;
... and there's no standard notation for plain binary literals (C++14 later added the 0b prefix for that), so again you would use hex. That's fine, because hex and binary convert to each other very easily.
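A small sketch (the names are illustrative) of why the conversion is so easy: each hex digit corresponds to exactly four bits.

#include <iostream>

int main()
{
    unsigned int v = 0xA5;          // 1010 0101 in binary
    std::cout << (v >> 4) << '\n';  // top hex digit:    0xA -> 1010, prints 10
    std::cout << (v & 0xF) << '\n'; // bottom hex digit: 0x5 -> 0101, prints 5
    return 0;
}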
But what's the point of assigning a value to an int like int f(8) or int f(010)? They all hold the same value, so why would I use one form rather than another?
When you write a number, the compiler only sees its binary equivalent, as that is one of the first things to be turned into code.

For example, let's say I write the number "13". The compiler will recognize that as 0000 1101 in binary.

If I write the number 015, the compiler will recognize that as octal and will see it as 0000 1101 in binary.

If I write the number 0xd, the compiler will recognize that as hex and will see it as 0000 1101.

These are all exactly the same; it makes no difference which one you use.
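A one-liner confirming the point (assuming a C++11 compiler for static_assert):

static_assert(13 == 015 && 13 == 0xd, "all three are the bit pattern 0000 1101");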

Why would you use one over the other? If I am interested in the number itself, I'll use decimal. If I am packing individual bits to represent discrete options for a function, I'll use hex, as hex lets me see at a glance which bits are where. I rarely use octal unless I am dealing with a strange standard when interfacing with a module.

Example: I want to make a function that takes a series of options in one integer. I would do the following:

#define OPTION1  0x00000001
#define OPTION2  0x00000002
#define OPTION3  0x00000004
// ...
#define OPTION12 0x00000800

void myFunction (unsigned int MyOptions);

int main()
{
  myFunction (OPTION1 | OPTION3); // The | operator lets us combine the bits
  return 0;
}

void myFunction (unsigned int MyOptions)
{ // The & operator lets us mask a bit to see whether it is set.
  if (MyOptions & OPTION1) { /* ... do something ... */ }
  if (MyOptions & OPTION2) { /* ... do something else ... */ }
}


Here we can clearly see that OPTION1 | OPTION12 will create 0x00000801, which shows that two bits have been set. This would be much harder to see in the decimal form, 2049.
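A quick sketch (re-declaring two of the macros above, purely for illustration) of how the hex form exposes the bits that decimal hides:

#include <cstdio>

#define OPTION1  0x00000001
#define OPTION12 0x00000800

int main()
{
    unsigned int flags = OPTION1 | OPTION12;
    std::printf("%#010x\n", flags); // prints 0x00000801 -- both set bits are visible
    std::printf("%u\n", flags);     // prints 2049       -- the same value, but opaque
    return 0;
}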
Be careful.
What you write in your source code != what the user types at the console.

You need to play with the iostream flags to get the same behavior as you get when compiling source code; otherwise anything with leading zeros is (by default) treated as decimal.
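A minimal sketch, assuming the standard <iostream> facilities: clearing the stream's basefield makes extraction honor 0 and 0x prefixes the way the compiler does.

#include <iostream>

int main()
{
    int n = 0;
    std::cin.unsetf(std::ios::basefield); // base is now taken from the prefix the user types
    std::cin >> n;                        // "010" reads as 8, "0x10" as 16, "10" as 10
    std::cout << n << '\n';
    return 0;
}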