64-bit Ints Taking Toll on 32-bit Performance

I'm writing an application which makes extensive use of 64-bit integers for mathematical operations. Profiling consistently shows that, on a 32-bit system, a large share of the run time is spent in a function called "aulldvrm." Google returned little on this function beyond the fact that it has something to do with 64-bit ints, so I'm assuming it's used to make 64-bit ints work on a 32-bit architecture. (On a 64-bit system with a 64-bit binary, performance is *much* better.)
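Here's a minimal sketch of the kind of code where it shows up (the names and values are made up for illustration, not from my actual code): every 64-bit / or % in a 32-bit build turns into a call to a runtime helper instead of a single instruction.

#include <cstdint>
#include <cstdio>

// Illustrative hot loop: each 64-bit / and % becomes a call to a CRT helper
// (such as aulldvrm) when compiled for 32-bit x86, because the CPU has no
// single instruction for 64-bit division there.
uint64_t digit_sum(uint64_t n)
{
    uint64_t sum = 0;
    while (n != 0)
    {
        sum += n % 10;   // helper call on a 32-bit build
        n   /= 10;       // may share the same call, since the helper returns both results
    }
    return sum;
}

int main()
{
    std::printf("%llu\n", (unsigned long long)digit_sum(1234567890123456789ULL));
}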

Now, I'm using 64-bit ints for their size, not just for the potential performance benefit they give on a 64-bit processor. I need every bit of storage they offer (no pun intended), so I can't really just do away with them. Is there any way I can optimize these 64-bit integers for a 32-bit architecture?
Have you tried the GMP library for big-number math operations?
I'd rather avoid external libraries if possible. Sixty-four bits is all the storage I need; anything more would add unnecessary complexity.

If it turns out GMP would solve my problem, and it's the only good option, I'll take a look at it. If there's any other way, I'd rather stay away from libraries.

EDIT: I just read that it might not work on Win64 platforms, so that makes it even less attractive for this particular application.
Well, aulldvrm is an internal 64-bit divide function (lldvrm = long long divide remainder); it returns both a / b and a % b, since a % b has to be computed anyway.
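Roughly, it behaves like this (conceptual sketch only; the real helper works on the raw register halves and isn't called like a normal function):

#include <cstdint>
#include <cstdio>
#include <utility>

// Conceptual sketch of a divide-remainder helper: one call produces both the
// quotient and the remainder, so a / b and a % b don't each pay for a
// separate division.
std::pair<uint64_t, uint64_t> divrem64(uint64_t a, uint64_t b)
{
    uint64_t q = a / b;          // the expensive part
    return { q, a - q * b };     // remainder falls out for free
}

int main()
{
    std::pair<uint64_t, uint64_t> qr = divrem64(1000003, 97);
    std::printf("%llu %llu\n",
                (unsigned long long)qr.first,
                (unsigned long long)qr.second);
}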

Your only realistic choices are:

1) Optimize your math operations on 64-bit numbers to avoid division (see the sketch after this list);
2) Turn on compiler optimizations and hope that helps;
3) Use an external library.
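As an example of 1): if your divisors happen to be powers of two (that's an assumption about your code), the 64-bit divide and mod reduce to a shift and a mask, which stay cheap even on 32-bit hardware:

#include <cstdint>
#include <cstdio>

// Example of option 1: when the divisor is a power of two, / and % can be
// replaced by a shift and a mask, so no runtime helper is needed even in a
// 32-bit build. Only applicable if your divisors really are powers of two.
inline uint64_t div_pow2(uint64_t value, unsigned shift)
{
    return value >> shift;                          // value / 2^shift
}

inline uint64_t mod_pow2(uint64_t value, unsigned shift)
{
    return value & ((uint64_t(1) << shift) - 1);    // value % 2^shift
}

int main()
{
    std::printf("%llu %llu\n",
                (unsigned long long)div_pow2(1000ULL, 4),
                (unsigned long long)mod_pow2(1000ULL, 4));
}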

Can you post the part of the code that is doing the divisions/mods, or a general algorithm?