Hi there, today I was reading my book and I couldn't understand this:
A value, such as 42, is known as a literal because its value is self-evident. Every literal has a type. The form and value of a literal determine its type.

Integer and Floating-Point Literals

We can write an integer literal using decimal, octal, or hexadecimal notation. Integer literals that begin with 0 (zero) are interpreted as octal. Those that begin with either 0x or 0X are interpreted as hexadecimal. For example, we can write the value 20 in any of the following three ways:

20      /* decimal */
024     /* octal */
0x14    /* hexadecimal */

The type of an integer literal depends on its value and notation. By default, decimal literals are signed whereas octal and hexadecimal literals can be either signed or unsigned types. A decimal literal has the smallest type of int, long, or long long (i.e., the first type in this list) in which the literal's value fits. Octal and hexadecimal literals have the smallest type of int, unsigned int, long, unsigned long, long long, or unsigned long long in which the literal's value fits. It is an error to use a literal that is too large to fit in the largest related type. There are no literals of type short. We'll see in Table 2.2 (p. 40) that we can override these defaults by using a suffix.
That's sloppy terminology: 42 is an expression, specifically, a literal expression, which evaluates to an object of type int whose value is 42.
To compare, 40 + 2 is an expression that evaluates to the same thing, but it isn't a literal expression (it consists of an operator whose two operands are literal expressions).
40 + 2 is a preprocessing expression that is substituted for the integer literal 42 by the preprocessor. That is, the compiler will not create objects for 40 and 2; it will see 42 in their place.
This simply means that integer constants can be represented in different notations in C++: in decimal notation, in octal notation and in hexadecimal notation.
No, it is not a "preprocessing expression" (there is no such thing). You probably meant to say "constant expression".
You are right. But C, for example, defines several terms such as preprocessing token, preprocessing number, integer constant, and so on, so it is easy to mix these terms up.
In C++, you don't have to; the language handles all these conversions.
What you mean is: can I output an integer in hexadecimal form?
The answer is: yes, you can. Just include <iomanip> and use the hex manipulator.
#include <iomanip>
#include <iostream>

using std::cout;
using std::hex;
using std::endl;

int main() {
    int x = 45;
    cout << hex << x << endl;   // prints 2d
    return 0;
}
Hexadecimal is a base-16 number representation in the same way decimal is base-10. The only real difference (aside from the base itself) is the set of digits used (0-9 for decimal, 0-F for hexadecimal). An example of both might clear things up for you.
For decimal, each digit appearing to the left of the decimal point represents a value between 0 and 9 times an increasing power of 10. Digits appearing to the right of the decimal point represent a value between 0 and 9 times an increasingly negative power of 10.
For example, the value 123.456 means:
1*10^2 + 2*10^1 + 3*10^0 + 4*10^-1 + 5*10^-2 + 6*10^-3
Each hexadecimal digit to the left of the hexadecimal point represents a value between 0 and 15 times an increasing power of 16.
For example, the hexadecimal value 1234 corresponds to
1*16^3 + 2*16^2 + 3*16^1 + 4*16^0
or
4096 + 512 + 48 + 4 = 4660.
The base is 16 but there are only 10 decimal digits (0-9), so we need six additional digits to represent the values 10-15 in hexadecimal. Rather than inventing new symbols, we use the letters A-F.