Variable ranges

Why are the ranges for variable types different on other systems? Are there any types that are guaranteed to never change regardless of platform, maybe like a char?

Thanks
Variable ranges are determined by the size of the type in bytes.
They differ between systems because, when you compile a program, each C++ type is mapped to a machine-level storage size:
int typically maps to a machine word,
long int may map to a double word,
and the other types are mapped similarly.
The size of a machine word is defined by the CPU architecture and the operating system:
on most 32-bit systems the word is 4 bytes, and hence an int is 4 bytes long.
If the architecture is 64-bit, the natural word is 64 bits = 8 bytes, although most compilers there still keep int at 4 bytes and grow long and/or pointers instead.
There are 16-bit systems, but I think those are mostly legacy now; even mobile phones are at least 32-bit platforms.
The only type whose size is guaranteed never to change is char, which is always exactly 1 byte (sizeof(char) == 1 by definition, though a byte is only required to be at least 8 bits).
The bool type might be given a whole byte of storage for compatibility with the hardware.
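As a quick, hedged illustration (the commented sizes are just what a typical desktop compiler tends to print, not guarantees), you can query the sizes yourself with sizeof:

#include <iostream>

int main()
{
    // sizeof(char) is 1 by definition; everything else depends on compiler and platform.
    std::cout << "char:      " << sizeof(char)      << '\n';  // always 1
    std::cout << "bool:      " << sizeof(bool)      << '\n';  // usually 1
    std::cout << "short:     " << sizeof(short)     << '\n';  // often 2
    std::cout << "int:       " << sizeof(int)       << '\n';  // often 4
    std::cout << "long:      " << sizeof(long)      << '\n';  // 4 or 8, platform dependent
    std::cout << "long long: " << sizeof(long long) << '\n';  // usually 8
    std::cout << "double:    " << sizeof(double)    << '\n';  // usually 8
}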


You can try reading an introduction to assembly language; it can greatly help you understand this.
So the other types like long and float/double etc. are not guaranteed to be the same size on other platforms, not even a short?

How does a type modifier affect another type such as int? I.e. if I cout'ed sizeof(short int) it returns 2 on my platform. Does that mean that the short prefix 'subtracts 2 bytes from an int'? Does that make sense?
Type modifiers are not an arithmetic operation applied to the size of a type.
For example, I was shocked to find that VS2010 defines:
double is the same as long double.
What the book exactly said:
The ISO/IEC standard for C++ also defines the long double floating-point type, which in Visual C++ 2010 is implemented with the same range and precision as type double. With some compilers, long double corresponds to a 16-byte floating-point value with a much greater range and precision than type double.
type         size   range of values
double       8      ±1.7 × 10^±308 with approximately 15 digits accuracy
long double  8      ±1.7 × 10^±308 with approximately 15 digits accuracy

This is for a 32-bit platform.

This means that the compiler transforms both types to DWORD.
The compiler decides which language type maps to which machine type.
It will "most likely" map int to a machine word; you can use sizeof to avoid unwanted bugs.
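To make that concrete: short int is simply its own type with its own platform-dependent size, not "int minus 2 bytes". A minimal sketch, assuming a VS2010-style 32-bit build like the one quoted above (the commented values are only what such a build would typically report):

#include <iostream>

int main()
{
    // short is a distinct type; nothing is "subtracted" from int.
    std::cout << sizeof(short int)   << '\n';  // typically 2
    std::cout << sizeof(int)         << '\n';  // typically 4
    std::cout << sizeof(long int)    << '\n';  // 4 on this kind of platform
    std::cout << sizeof(double)      << '\n';  // 8
    std::cout << sizeof(long double) << '\n';  // 8 with VS2010, 12 or 16 with some other compilers
}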
So ultimately it's up to the compiler then?
Both the compiler and the architecture of the CPU.

@Rechard3
This is for a 32-bit platform.
This means that the compiler transforms both types to DWORD.


(I guess it's a typo: in the above example, doubles are stored in memory as a QWORD (8 bytes), not a DWORD.)
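If you want the build to fail when that assumption does not hold, a minimal compile-time check (assuming a C++11-capable compiler; the message text is just illustrative):

// Platform-specific assumption: a double occupies 8 bytes (a QWORD, i.e. two DWORDs).
// The C++ standard itself does not require this.
static_assert(sizeof(double) == 8, "double is not 8 bytes on this platform");

int main() {}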

Generally speaking: there is a lot of different hardware (the 8-bit 8051, the 16-bit 8086, 32-bit IA-32, etc.). When we're going to write an HLL compiler for new hardware, we generally have two ways:

1. Make all types the same on all platforms. Problem: let's say our target CPU is 8-bit only, but we want to have a 32-bit integer. We need to emulate it in software (by joining four 8-bit machine words, sketched below), which makes things more complicated.

2. Keep types fitted to the hardware.
So for example, on the 8-bit 8051 an int is kept as small as the standard allows (16 bits), while on IA-32 it has 32 bits, etc.
Advantage: we need little or no software emulation.
Disadvantage: some code cannot be ported from one platform to another directly.

Most compiler makers decide to go with [2] to keep things simple. However, software emulation is sometimes still in use, e.g. to emulate floating-point types (float, double in C) on hardware without an FPU.
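To make point [1] concrete, here is a minimal sketch of what "joining four 8-bit machine words" into one 32-bit integer means. The function name and the use of <cstdint> are purely illustrative; a real compiler for an 8-bit CPU does this in its runtime library or generated code:

#include <cstdint>
#include <iostream>

// Build one 32-bit value out of four 8-bit pieces, the way a compiler
// targeting an 8-bit CPU has to do in software.
std::uint32_t join_bytes(std::uint8_t b3, std::uint8_t b2,
                         std::uint8_t b1, std::uint8_t b0)
{
    return (static_cast<std::uint32_t>(b3) << 24) |
           (static_cast<std::uint32_t>(b2) << 16) |
           (static_cast<std::uint32_t>(b1) << 8)  |
            static_cast<std::uint32_t>(b0);
}

int main()
{
    std::cout << std::hex << join_bytes(0x12, 0x34, 0x56, 0x78) << '\n';  // prints 12345678
}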