Why should 64 bit development ever be chosen over 32 bit?

Pardon me if this is a silly question, but I want to make sure I'm clear on this subject.

64 bit generally allocates more memory to objects (Where it's 4 bytes in 32, it's 8 bytes in 64), thus wasting the extra memory*. 32 bit has the ability to be cross platform, whereas 64 bit cannot, so if you want backwards compatibility you'll have to write two versions of the same program.

*That is, unless you want larger objects.

The only advantage I see is the use of larger objects; however, I often see vendors supply 32 and 64 bit versions of the same software with no noticeable difference when run in a 64 bit environment. I thought the grand goal was to make a single executable that works from a targeted platform and up.

My point/question:

- Is the extra space given to you by 64 bit ever useful enough for you to abandon backwards compatibility? Any example cases?

I suppose this subject spans beyond C or C++, but in general, what significant advantages does 64 bit offer over 32 bit that would make me want to use it? Why would I want to manage multiple builds, especially for larger projects?

Again, hope I don't sound silly. I don't want to make any mistakes or assumptions.
For the following I'm going to assume that when you say "32 bit" you're really saying x86, and when you say "64 bit" you're really saying "x86-64" or something equivalent.

64 bit generally allocates more memory to objects (Where it's 4 bytes in 32, it's 8 bytes in 64)
The way this statement is phrased, it's false at several levels.
Targeting x86-64 doesn't necessarily double the size of all objects. First, Windows compilers didn't scale the size of their types, so sizeof(int) == 4 on both Win32 and Win64. Second, the programmer may request from the compiler a specific number of bits for a given variable, rather than implementation-defined int, long, etc.
It is true that pointers will be larger, but whether this translates to significantly more memory usage depends on the particular application.
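A quick sketch of what that means in practice (the exact numbers depend on the compiler's data model; 64-bit Windows uses LLP64, most 64-bit Unix systems use LP64):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // On Windows, int and long stay 4 bytes whether you target Win32 or Win64;
        // on 64-bit Linux, long grows to 8. Only pointers are guaranteed to grow.
        std::printf("int     : %zu\n", sizeof(int));
        std::printf("long    : %zu\n", sizeof(long));
        std::printf("void*   : %zu\n", sizeof(void*));   // 4 for x86, 8 for x86-64

        // If you need an exact width regardless of target, ask for it explicitly:
        std::printf("int32_t : %zu\n", sizeof(std::int32_t));
        std::printf("int64_t : %zu\n", sizeof(std::int64_t));
    }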

32 bit has the ability to be cross platform, whereas 64 bit cannot
Targeting x86 means that the compiler generates code that runs on x86 processors, and likewise for x86-64, so this statement is a bit absurd. What you meant to say is that an x86 executable for Windows can also be run on x86-64 installations of Windows (other operating systems don't do this as far as I'm aware, at least not by default).
This is true, but depending on how the application has been written, targeting one or the other may be as simple as flipping a switch on the compiler.
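For instance, with GCC or Clang it can be literally one flag, assuming the 32-bit support libraries (multilib) are installed; with Visual Studio it's a matter of picking the Win32 or x64 platform. Here app.cpp is just a stand-in name:

    g++ -m32 -O2 app.cpp -o app32    # x86 build
    g++ -m64 -O2 app.cpp -o app64    # x86-64 build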

if you want backwards compatibility you'll have to write two versions of the same program
Nah. There may be some parts that can't be written independently of the platform, but if the whole program has to be written twice, you're doing something wrong.
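Normally you isolate whatever is genuinely platform-dependent behind one small seam and write everything else once. A contrived sketch using the compilers' usual predefined macros:

    #include <cstdio>

    // The platform-dependent detail lives in one place...
    #if defined(_WIN64) || defined(__x86_64__)
    constexpr const char* target = "x86-64";
    #else
    constexpr const char* target = "x86";
    #endif

    // ...and the rest of the program is written once for both targets.
    int main()
    {
        std::printf("built for %s\n", target);
    }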

I thought the grand goal was to make a single executable that works from a targeted platform and up.
I would prefer an executable that takes advantage of all the features of my very expensive CPU, rather than one that will run on architectures that don't yet exist. Compilers will not suddenly vanish. We can always rebuild.

Besides the extra-large address space, x86-64 includes various SIMD instruction sets, which x86 processors may not include. A vectorizing compiler can simply generate instructions for those sets without having to insert branches for alternate code paths. x86-64 also has more registers, which some applications can benefit from. In other words, some applications will see a performance improvement simply by being compiled for x86-64.
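For example, every x86-64 CPU has at least SSE2, so with optimizations enabled a compiler targeting x86-64 can turn a plain loop like this into packed SSE instructions without any runtime CPU checks (rough sketch; how aggressively it vectorizes depends on the compiler and flags):

    #include <cstddef>

    // An x86-64 compiler can emit packed SSE instructions for this loop
    // unconditionally, since SSE2 is part of the architecture baseline.
    void saxpy(float a, const float* x, float* y, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }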

x86-64 has been around for ten years. The number of x86 OS instances in the wild drops every day. If you want maximum compatibility, it's best to write code that can be retargeted easily (this, by the way, is probably a sensible strategy in general) and build for both platforms. If you don't particularly care, just target x86-64; it's the way of the future.
Thank you for your explanations! I hate to be the OP of a stupid question, but it was a case of lacking a proper explanation to begin with.

As for the various errors on my behalf, I believe I understand; sizes are subject to change, but as it currently stands neither Win32 nor Win64 makes any such change (With the exception of pointers, which become larger given that more memory is addressable and they are addresses into that memory).

The minimum size of each type (Pointers excluded) remains the same, but depending on the platform its limit can be higher or lower, correct? I believe my initial misunderstanding came from confusing those two concepts (Min and max). The study I've been taking also warns that going past the guaranteed minimum can be dangerous as its results are undefined, though I guess in the context of this discussion you'll be fine if you stick to the constraints of your target platform.
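Just so I'm not being vague, this is roughly what I mean (if I have it right):

    #include <climits>
    #include <cstdio>

    int main()
    {
        // The standard only guarantees that int can hold at least -32767..32767;
        // whatever <climits> reports is the real limit on this platform.
        std::printf("INT_MAX here: %d\n", INT_MAX);   // 2147483647 on typical x86/x86-64 compilers
        // Counting on more than the guaranteed minimum ties the code to platforms
        // that provide it, and overflowing the real INT_MAX is undefined behaviour.
    }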

Be honest with me, are any of these common misconceptions or was I completely on the wrong side of these concepts?

I do like the way you put it: "It's the way of the future", but I do worry about Grandma and her 2000s-era Windows XP from Walmart. Should I not?
The minimum size of each type (Pointers excluded) remains the same, but depending on the platform its limit can be higher or lower, correct? I believe my initial misunderstanding came from confusing those two concepts (Min and max). The study I've been taking also warns that going past the guaranteed minimum can be dangerous as its results are undefined, though I guess in the context of this discussion you'll be fine if you stick to the constraints of your target platform.
I don't understand what you mean.

I do like the way you put it: "It's the way of the future", but I do worry about Grandma and her 2000s-era Windows XP from Walmart. Should I not?
Should you also bend over backwards to make sure your code runs on 9x kernels? That was only a couple years before granny's computer. Or do we arbitrarily decide that 9x kernels are not supported? Is arbitrarily deciding that pre-Vista kernels are not supported that much worse?
> Why should 64 bit development ever be chosen over 32 bit?

Ideally the code that we write should be portable, and we should leave the choice of 32-bit or 64-bit to the user.

For instance, the Slimjet build of Chromium:

32-bit Slimjet web browser is recommended for both 32-bit and 64-bit Windows. If you want to go 64-bit at the cost of more memory usage, you can get 64-bit Slimjet for Windows. http://www.slimjet.com/en/dlpage.php


Slimjet for Windows 64-bit

64-bit version uses significantly more memory than 32-bit version. It has some stability and performance improvements which are not quite noticeable for most people. We do not recommend it for machines with under 8GB memory or nerves sensitive to the memory usage of applications. If your computer has affluent memory and you want to squeeze the last bit of performance out of Slimjet, go for it by all means. http://www.slimjet.com/en/dlpage_win64.php
32-bit can be faster on a 64-bit architecture, but it is more a question of whether the programmer is still coding in VC7/8/9 or on an old platform. If your application already handles alignment correctly for 32-bit, it is fairly easy to switch to 64-bit.
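For example, a struct that mixes pointers with smaller types changes its size and padding between the two builds, which is the sort of thing to watch for (rough sketch; the exact layout is up to the compiler):

    #include <cstdio>

    struct Node
    {
        char  tag;     // 1 byte
        void* next;    // 4 bytes on x86, 8 on x86-64
        int   value;   // 4 bytes on both
    };

    int main()
    {
        // Typically 12 bytes when built for x86 and 24 for x86-64, because the
        // pointer's size and alignment change; anything that assumes a fixed
        // layout (file formats, raw casts, network messages) needs a second look.
        std::printf("sizeof(Node) = %zu\n", sizeof(Node));
    }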
Topic archived. No new replies allowed.