datatype size

Hello,

I am new to C++ but have a lot of programming experience in Java. In C++ the size of int can vary depending on the compiler and platform. It must be at least as large as short and no larger than long, so there is no guarantee that it is 4 bytes.

My question is: if I am developing an application on Windows using Windows libraries, how can I be sure that when I run my code on Linux it behaves as I expect and the sizes of int and the other data types remain the same?

Thanks,
http://en.cppreference.com/w/cpp/types/integer

#include <cstdint>

std::uintmax_t factorial( std::int_fast32_t number ) ;

// etc 

Thanks JLBorges!

But this is C++11. What if I am using C++98?
On the platform you are running on, you can check the value of the INT_MAX constant and many others, which are in the <climits> (or <limits.h>) header. Or you can check sizeof(datatype). Hope this helps!
I don't mean to hijack your thread, but can I ask JLBorges what int_fast32_t does? I have used uint64_t before.

I couldn't find anything on Wikipedia or Google.

Doesn't it say on the linked page? It maps to the type of the fastest signed integer that is at least 32 bits long.
What I meant was, how can one type of int be faster than another?
Have a look at stdint.h. The new C++ types are based on these.
I will start my own thread
how can one type of int be faster than another?
It may be faster to use the machine word rather than smaller units for integral types. It's implementation defined, and those types are declared in stdint.h and are now part of C++ as well.

Here's an extract from stdint.h from Xcode 4.
/* 7.18.1.2 Minimum-width integer types */
typedef int8_t           int_least8_t;
typedef int16_t         int_least16_t;
typedef int32_t         int_least32_t;
typedef int64_t         int_least64_t;
typedef uint8_t         uint_least8_t;
typedef uint16_t       uint_least16_t;
typedef uint32_t       uint_least32_t;
typedef uint64_t       uint_least64_t;

/* 7.18.1.3 Fastest-width integer types */
typedef int8_t            int_fast8_t;
typedef int16_t          int_fast16_t;
typedef int32_t          int_fast32_t;
typedef int64_t          int_fast64_t;
typedef uint8_t          uint_fast8_t;
typedef uint16_t        uint_fast16_t;
typedef uint32_t        uint_fast32_t;
typedef uint64_t        uint_fast64_t;


And this is the same thing from GCC 4.5.3 on Cygwin.
/* Minimum-width integer types */
typedef signed char int_least8_t;
typedef short int_least16_t;
typedef int int_least32_t;
typedef long long int_least64_t;

typedef unsigned char uint_least8_t;
typedef unsigned short uint_least16_t;
typedef unsigned int uint_least32_t;
typedef unsigned long long uint_least64_t;

/* Fastest minimum-width integer types */
typedef signed char int_fast8_t;
typedef int int_fast16_t;
typedef int int_fast32_t;
typedef long long int_fast64_t;

typedef unsigned char uint_fast8_t;
typedef unsigned int uint_fast16_t;
typedef unsigned int uint_fast32_t;
typedef unsigned long long uint_fast64_t;
Note that in this environment, it's faster to do 16-bit arithmetic in 32 bits, but not in the environment above.
Thanks for your help, kbw.

I had started a new thread, so I didn't hijack this one, and it has been solved there. Your comment solves it as well. Good work!
