Why is 64 bits the largest primitive?

Hi guys,

So I could be wrong here; I'm guilty of mostly using C++98 when I program. I have read that you can indeed have primitives larger than 64 bits (I think),

but let's take C++98 as the center of the thread. There, the largest primitive you can have is a long long nameOfData, or an unsigned long long nameOfData for the largest number.

So why did the good folks who implemented the compilers agree on 64 bits (8 bytes)?

This Stack Overflow thread ( https://stackoverflow.com/questions/23038451/how-does-a-32-bit-processor-support-64-bit-integers ) explains simply how a processor can handle numbers bigger than its word size: a 32-bit processor can add 64-bit numbers, but it requires extra instructions.
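
Roughly, as I understand it, that boils down to something like this in C++ (just an illustrative sketch with made-up names; the real hardware does it with an add plus an add-with-carry instruction):

    #include <cstdint>

    // Add two 64-bit values using only 32-bit arithmetic, the way a
    // 32-bit CPU has to: add the low halves, detect the carry, then add
    // the high halves plus that carry.
    uint64_t add64_using_32bit_ops(uint32_t a_lo, uint32_t a_hi,
                                   uint32_t b_lo, uint32_t b_hi)
    {
        uint32_t lo    = a_lo + b_lo;            // low half, may wrap around
        uint32_t carry = (lo < a_lo) ? 1 : 0;    // wrap-around means a carry out
        uint32_t hi    = a_hi + b_hi + carry;    // high half absorbs the carry
        return (static_cast<uint64_t>(hi) << 32) | lo;
    }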

So with that being said, why stop at 64 bits? Why not 128 bits, or more boldly, 256 bits?

I'm sure there have to be some good reasons for it.
Last edited on
The answer is hardware.
Down in the CPU, for a long long time, most PC and office computers (not servers or big special-purpose stuff) were 32-bit (1990 to about 2010). Around 2010, 64-bit came along, allowing MUCH more RAM to be addressed (pointers were now 64 bits; on 32-bit systems the limit was 4GB of RAM!!!) and larger integers to be processed inside a SINGLE CPU register.

There are machines with bigger registers, but it's not mainstream right now, and the compilers for those deal with it via language extensions. An old example of this was that Visual Studio used to support a 10-byte long double, because the hardware supported it (it helps control round-off in the hardware).

You can use a library to get larger integers. I would argue that C++ may want to add one of these to its toolbox, but for now, it's not there. Until then, though, it's the 'word size' of the current mainstream hardware, and that is 64 bits.
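
For example, Boost.Multiprecision is one such library (assuming you have Boost installed; it is third-party code, not part of the standard):

    #include <boost/multiprecision/cpp_int.hpp>
    #include <iostream>

    int main()
    {
        // uint256_t is a 256-bit unsigned integer done entirely in software.
        boost::multiprecision::uint256_t big = 1;
        big <<= 200;                      // 2^200, far beyond any built-in type
        std::cout << big << '\n';
    }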

Anything bigger than what the CPU can handle impacts performance; Python slows to a crawl doing simple math because of this (it handles integers as a container of bytes). There are some good libs for C++ for it, but it's still a lot slower than using the word-sized ints.

Another issue is practical use. 64-bit ints are HUGE. There are not a lot of things that need bigger; encryption is one of them, but beyond that, it's rather uncommon to need so much. Granted, almost every computer does a fair bit of encryption every day.
Last edited on
jonnin wrote:
"Anything bigger than what the CPU can handle impacts performance; Python slows to a crawl doing simple math because of this (it handles integers as a container of bytes). There are some good libs for C++ for it, but it's still a lot slower than using the word-sized ints."


That makes sense. But if, for some strange reason, the C++ standard committee met and decided to introduce a 128-bit integer and a 256-bit integer respectively, would this be possible (not taking into account its feasibility)?

I mean, I'm sure the CPU (64-bit in most cases) could certainly handle adding 256-bit values by breaking them down into 4 64-bit chunks and feeding the carry out of the lower chunks into the carry in of the upper 192 bits, but would the CPU be able to work with other operations on the 256-bit values, such as 256-bit pointers (could the language somehow make this work internally)?
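
Something like this sketch, I imagine (my own made-up struct, just to picture it; real libraries lean on dedicated add-with-carry instructions):

    #include <cstdint>

    // A 256-bit unsigned integer stored as four 64-bit limbs,
    // limb[0] being the least significant. Illustrative only.
    struct U256 {
        uint64_t limb[4];
    };

    // Add two 256-bit values: each 64-bit chunk is added along with the
    // carry out of the chunk below it.
    U256 add256(const U256& a, const U256& b)
    {
        U256 r{};
        uint64_t carry = 0;
        for (int i = 0; i < 4; ++i) {
            uint64_t sum = a.limb[i] + b.limb[i];      // may wrap; that wrap is the carry
            uint64_t c1  = (sum < a.limb[i]) ? 1 : 0;  // carry from a + b
            r.limb[i]    = sum + carry;
            uint64_t c2  = (r.limb[i] < sum) ? 1 : 0;  // carry from adding the old carry
            carry = c1 + c2;                           // never exceeds 1
        }
        return r;                                      // carry out of the top chunk is dropped
    }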

Side question, and on the same note: how does a 32-bit machine handle a pointer to a 64-bit long long? (I feel like I'm confusing something extremely basic here.)

thanks Jonnin :)

Last edited on
There are arithmetic libraries for fixed-length integers, if you want to try how it would run. Like jonnin said, beyond encryption there aren't many applications that need such huge numbers. For reference, if you used a 256-bit integer to represent a position in space, you could represent any position in the observable universe with a precision of 10^-48 millimeters. That's about an undecillionth of the charge radius of a proton.
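
If you want to check that figure yourself, the arithmetic is straightforward (the constant below is a rough published value for the diameter of the observable universe, so treat it as an order-of-magnitude estimate):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Rough figure: observable universe diameter ~8.8e26 m,
        // with 2^256 distinct values to spread across it.
        const double universe_mm = 8.8e26 * 1000.0;   // metres -> millimetres
        const double steps       = std::exp2(256.0);  // ~1.16e77
        std::printf("%.1e mm per step\n", universe_mm / steps);  // ~7.6e-48 mm
    }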
Last edited on
Encryption libraries already do all kinds of math on large integers, so yes, it is possible.
I can see it now: the long long long type.
Remember that to have 256-bit pointers that mean anything, you need hardware across the board: not just CPU registers, but RAM would need to change to address that much. We are not even close to consuming 2^64 bytes of RAM, or to being able to afford or install that much. Until we are, even 2^65 is not useful for a pointer.

A terabyte of memory is ~1e12 bytes.
2^64 is about 1.8e19.
No PC has 1.8e19 bytes of memory installed. I doubt any servers do, either -- a quick Google search said 256TB is the upper limit. That is only ~2.56e14 ... a long long (pun!) way from 1.8e19.
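
For the record, the exact top of the 64-bit range is easy to print:

    #include <cstdint>
    #include <iostream>
    #include <limits>

    int main()
    {
        // Largest value a 64-bit unsigned integer can hold -- and therefore
        // the largest byte address a 64-bit pointer could name:
        std::cout << std::numeric_limits<std::uint64_t>::max() << '\n';
        // prints 18446744073709551615, i.e. roughly 1.8e19
    }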

We can dream. If we get this kind of RAM, we can hash-table anything (ok, not anything, but much more than we can today) and do a LOT of problems in O(1) that currently are not this quick :P Interestingly, we may be getting closer. SSDs are just a gnat's hair away from abstracting the 'disk drive' away entirely and making 'everything RAM'. If you plugged it all in right, there may be a way to get ramming speed out of SSDs sooner or later, and it would all mush together into a wonderful huge puddle of bytes that we could use any which way by addressing 128 or 256 bits' worth. But that is not today.
Last edited on
That's very true; I was under a misunderstanding. Even a char pointer is 8 bytes on a 64-bit system using a 64-bit compiler, so a 128- or 256-bit pointer isn't possible, and as you mentioned, 2^64 is gargantuan, so a 128-bit pointer isn't really useful on modern-day (home use) computers.

As mentioned before, 32-bit computers using a 32-bit compiler can actually work with 64-bit numbers (uint64_t), but it requires more instructions and really isn't performance friendly, especially if your application relies heavily on performance. But technically it would be possible for a compiler (not an external library) to implement a 128-bit or even 256-bit number, right? It would probably require a lot more instructions, and again the performance would suck, and since C++ and C are known as performance-driven languages it's probably not optimal. BUT in theory compilers could implement a 128- or 256-bit number, right? And again, not that they necessarily would; as Helios mentioned, there isn't much need for 256-bit numbers outside of mathematical calculations and encryption.
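
As it turns out, some compilers already do part of this: GCC and Clang offer a non-standard __int128 / unsigned __int128 on 64-bit targets, built from exactly those extra instructions. It's a compiler extension, not standard C++, so this sketch only builds on those compilers:

    #include <cstdint>
    #include <iostream>

    int main()
    {
        // unsigned __int128 is a GCC/Clang extension on 64-bit targets.
        unsigned __int128 x = UINT64_MAX;   // largest 64-bit value
        x *= 10;                            // would overflow uint64_t; fine in 128 bits

        // There is no standard stream operator for __int128, so print the halves.
        std::uint64_t hi = static_cast<std::uint64_t>(x >> 64);
        std::uint64_t lo = static_cast<std::uint64_t>(x);
        std::cout << "high 64 bits: " << hi << ", low 64 bits: " << lo << '\n';
    }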
Last edited on
Yes, many programs use larger-than-64-bit integers. It's not hard to make it work; it's just slower than if you had hardware support for it. When I started coding, most PCs did not have floating-point hardware; they just had to do it with instructions, working with the integer registers and tools... but they did floating point that way for over a decade.

If you need it, you need it, and if it's slower, that is how it is. C++ and C are still equal to or faster than other languages that do the same thing; it's not like Python has hardware that C++ does not...
Last edited on
Topic archived. No new replies allowed.