RE: List Of Ways To Get Total Memory On Each Kind Of Computer

This forum is a continuation of http://www.cplusplus.com/forum/lounge/190436/

The biggest problem, though, was a miscommunication about what I meant. I don't believe this topic gets discussed a lot, so I will try to explain it.

So, each computer can have a different amount of RAM, but the number of bits actually needed to address it is only round_up(log2(max_pointer_value - min_pointer_value)), because min_pointer_value can be added back to a stored offset to recover the real pointer. However, the number of bits a conventional pointer occupies is always the full pointer size of the platform. What this means is, for example: I have 8GB of RAM in my x86_64 PC, and I am running an example application that uses a 1048576 (2^20) length array of pointers. In a perfect world, 8GB of RAM means the application needs at most 33 bits of each pointer to store an address, so the array would take up 4325376 bytes (a little over 4MB), because each address could be crammed into a memory-efficient 33 bits per index (using bitwise operators). Unfortunately, we do not live in a perfect world, so in actuality 64 bits are used to store each pointer (even though only 33 of them are ever needed), and the array actually takes up 8388608 bytes (8MB).

That waste may sound inconsequential, and for most applications it is. But in the growing industry of 3D simulation, HUGE numbers of points have to be stored and memory is scarce, so (if anything) it would be practical to go through the 'trouble' of using bitwise operators to pack the pointers into a more compact array based on their actual range of values. Even for general-purpose applications, it only takes a few extra CPU cycles to save a lot of memory.
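
To make that concrete, here is a rough sketch of the bit-packing idea (not code from the thread; PackedArray and everything in it are made-up names, and the entry width is whatever the caller decides is enough):

#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a fixed-width bit-packed array. Each entry occupies
// exactly `bits` bits (1..64) inside a contiguous buffer of 64-bit words.
class PackedArray {
public:
    PackedArray(std::size_t count, unsigned bits)
        : bits_(bits), words_((count * bits + 63) / 64, 0) {}

    void set(std::size_t i, std::uint64_t value) {
        const std::size_t bit = i * bits_;
        const std::size_t w   = bit / 64;
        const unsigned    off = bit % 64;
        const std::uint64_t mask = (bits_ == 64) ? ~0ull : ((1ull << bits_) - 1);
        value &= mask;

        words_[w] = (words_[w] & ~(mask << off)) | (value << off);
        if (off + bits_ > 64) {                       // the entry straddles two words
            const unsigned hi = off + bits_ - 64;     // bits spilling into the next word
            const std::uint64_t hi_mask = (1ull << hi) - 1;
            words_[w + 1] = (words_[w + 1] & ~hi_mask) | (value >> (bits_ - hi));
        }
    }

    std::uint64_t get(std::size_t i) const {
        const std::size_t bit = i * bits_;
        const std::size_t w   = bit / 64;
        const unsigned    off = bit % 64;
        const std::uint64_t mask = (bits_ == 64) ? ~0ull : ((1ull << bits_) - 1);

        std::uint64_t v = words_[w] >> off;
        if (off + bits_ > 64)                         // pull in the spilled high bits
            v |= words_[w + 1] << (64 - off);
        return v & mask;
    }

private:
    unsigned bits_;
    std::vector<std::uint64_t> words_;
};

With bits = 33 and 2^20 entries, the buffer works out to the 4325376 bytes mentioned above; the shifting and masking on every access is the handful of extra CPU cycles being traded for that space.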

So, now back to my main question. I want to implement this super-memory-saver, so I would like to know which functions, in which libraries and with which linked files, on every existing platform, I could use to get the maximum and minimum addresses of memory used by the application.
This technique is called pointer compression. It's used very rarely -- I've only used it once, on a device with 1KiB of random-access memory.

In computer graphics, coordinates are almost universally stored in contiguous sections of memory. When those coordinates are manipulated by the GPU, they're not pointed to (because they're processed on the GPU, where main-memory pointers don't make sense) but indexed by arrays of integers called index buffer objects (IBOs).

In the context of contiguous memory, pointers and indices are functionally identical. There is a type named std::ptrdiff_t that can be used as an array index.
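
For example (illustrative only, with made-up names), a 32-bit index into a contiguous vertex array can stand in for a 64-bit pointer, much like an index buffer does on the GPU:

#include <cstddef>
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

int main() {
    std::vector<Vertex> vertices(1 << 20);            // contiguous storage
    std::vector<std::uint32_t> indices = {0, 1, 2};   // 4 bytes each instead of 8

    // Turning an index back into a pointer is one base + offset computation.
    Vertex* v = vertices.data() + static_cast<std::ptrdiff_t>(indices[1]);
    (void)v;
}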

For applications with large memory demands, it can be useful to allocate all memory up-front rather than as required. Programs that use this technique (often games) have a strict upper-bound on their memory usage, and handle their memory by constructing new objects in their memory pool as they are needed. C++ provides the custom allocator interface and placement new, partially for this purpose. In a pool system, pointers could be implemented as array indices into the pool.
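
A minimal sketch of that idea (made-up names, destruction and slot recycling omitted): all storage is reserved up-front, objects are constructed in place with placement new, and the "pointers" handed out are 32-bit slot indices.

#include <cstdint>
#include <new>
#include <vector>

// Hypothetical sketch: a fixed-capacity pool that reserves all storage
// up-front. Objects are built in place with placement new, and the
// "pointers" handed out are 32-bit slot indices.
template <typename T>
class Pool {
    struct Slot { alignas(T) unsigned char bytes[sizeof(T)]; };
public:
    explicit Pool(std::uint32_t capacity) : slots_(capacity), used_(0) {}

    std::uint32_t create(const T& value) {            // returns an index, not a T*
        if (used_ == slots_.size()) throw std::bad_alloc();
        ::new (static_cast<void*>(slots_[used_].bytes)) T(value);
        return used_++;
    }

    T& get(std::uint32_t index) {                     // index -> reference
        return *std::launder(reinterpret_cast<T*>(slots_[index].bytes));
    }

private:
    std::vector<Slot> slots_;                         // the whole pool, allocated once
    std::uint32_t used_;
};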

In a system that doesn't use a memory pool, things are trickier. Virtual memory isn't always contiguous and the system kernel can do whatever it wants with it, including increase or decrease the amount of it on the fly.

I'd guess that you would have to hack around with the kernel's memory management facilities to get that information, and to make guarantees about the values you're looking for.

Obviously, in a freestanding environment there's no such issue, but the problem is that on most desktop platforms you don't have control over the system your application is running on.

My answer is that the minimum pointer value, taken as an integer, is 0, and the maximum pointer value, taken as an integer, is implementation defined.
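
As an illustration (nothing here is guaranteed by the standard beyond the conversion itself), about the most you can portably do is convert pointers you actually own to std::uintptr_t and look at the values you happen to observe:

#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

int main() {
    std::vector<int> a(1000), b(1000);
    // Implementation defined: these integers mean whatever the platform says they mean.
    std::uintptr_t lo = reinterpret_cast<std::uintptr_t>(a.data());
    std::uintptr_t hi = reinterpret_cast<std::uintptr_t>(b.data());
    if (lo > hi) std::swap(lo, hi);
    // Nothing guarantees that future allocations will land inside this observed span.
    std::printf("observed span: %llu bytes\n",
                static_cast<unsigned long long>(hi - lo));
}
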
Not sure where you're getting the idea that 8MB of memory is a lot, especially in 3D simulation. This isn't 1995. The bulk of memory use in 3D simulation is texture, vertex (or point cloud) data, sound, and so forth.

And again, just like the previous thread, there is no defined value for the maximum address space that a specific program can use. The best you can do is define a block of memory the program can use and guarantee yourself that you won't allocate outside of that block of memory.

Even then, you're really not saving memory while you're probably murdering the cache.
In the case where memory is contiguous, such as vectors and arrays, this is not useful, because the savings are insignificant.

But for linked data structures it may be useful, particularly for graphs, which are among the kinds of data that can grow very large.
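
For instance (a hypothetical sketch, not from the thread), an adjacency list that stores 32-bit node indices instead of 64-bit node pointers halves the per-edge cost:

#include <cstdint>
#include <vector>

// Hypothetical sketch: edges hold 32-bit node indices rather than
// 64-bit node pointers.
struct Graph {
    std::vector<std::vector<std::uint32_t>> adjacency;   // adjacency[u] = neighbours of u

    void add_edge(std::uint32_t u, std::uint32_t v) { adjacency[u].push_back(v); }
};

int main() {
    Graph g;
    g.adjacency.resize(4);
    g.add_edge(0, 1);
    g.add_edge(1, 2);
}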

You should at least do a search on pointer compression on Google Scholar before you get too carried away, just to see what's already been done.

Here are a few papers and patents:
https://www.google.com/patents/US5276868
https://www.google.com/patents/US20090119321
http://llvm.org/pubs/2005-06-12-MSP-PointerComp.pdf
Thank you so much for your replies, you are all very very helpful.
Just because your computer has 8GB of physical memory doesn't mean that programs are limited to pointer values of 0 to 2^33-1. Most computers use virtual memory, so the 64-bit address that the program uses will point to who-knows-where in physical memory.

Also, if you're going to create a class like this, I think it makes sense to have it work on application-specific memory. In other words, let the application define what it wants to use the class for. There's a good chance that the program knows that the array of pointers contains values between minVal and maxVal. Your class could use this to determine the number of bits required.
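
Something along those lines (a hypothetical helper, not from the thread) could compute the width from an application-supplied range, with values then stored as offsets from minVal:

#include <cstdint>

// Hypothetical helper: how many bits does each entry need if the
// application promises its values lie in [minVal, maxVal]? Entries
// would then be stored as (value - minVal).
constexpr unsigned bits_needed(std::uint64_t minVal, std::uint64_t maxVal) {
    std::uint64_t range = maxVal - minVal;
    unsigned bits = 1;                       // even a 0..1 range needs one bit
    while (range >>= 1) ++bits;
    return bits;
}

static_assert(bits_needed(0, 8ull * 1024 * 1024 * 1024 - 1) == 33,
              "an 8 GiB range fits in 33 bits");
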
So, with the added factor of virtual memory, how could I know the maximum and minimum values a pointer could take? Also, I will be the one using this library.
It depends on your platform, I'd guess.

On my machine, for instance, I can increase the amount of available memory (swap space) on the fly by issuing the commands
# mkswap /dev/sdxn
# swapon /dev/sdxn 

This, as you can imagine, might complicate things. Beyond that, I don't know that the mapping between virtual addresses and real memory is guaranteed at all.

Maybe you could use some variant of a radix trie to store your pointers.
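
One rough way to apply that idea (a hypothetical design; a real radix trie would split the key further, and 64-bit pointers are assumed) is to bucket pointers by their upper 32 bits, so the high half is stored once per bucket and each entry costs only 4 bytes:

#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical design: pointers sharing their upper 32 bits share one bucket,
// so each stored entry is a 4-byte low half.
class BucketedPointers {
public:
    void add(void* ptr) {
        const std::uint64_t p = reinterpret_cast<std::uintptr_t>(ptr);
        buckets_[p >> 32].push_back(static_cast<std::uint32_t>(p));
    }

    template <typename F>
    void for_each(F f) const {                         // rebuild and visit each pointer
        for (const auto& [high, lows] : buckets_)
            for (std::uint32_t low : lows)
                f(reinterpret_cast<void*>(
                      static_cast<std::uintptr_t>((high << 32) | low)));
    }

private:
    std::unordered_map<std::uint64_t, std::vector<std::uint32_t>> buckets_;
};
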
http://www.cplusplus.com/forum/lounge/190436/#msg927785
Pointer compression is an unusual technique that sacrifices time efficiency to gain space efficiency. This isn't a trade-off that's particularly useful in game programming. Usually you'd want the exact opposite.
64-bit user-mode [Windows] processes may allocate any region in the lower 8 TiB of the full address space, which translates to a minimal pointer 43 bits long, or 6 bytes when rounded up to whole bytes.

You're slowing down every pointer dereference to save at most 25% of the space the pointers occupy. In practice you'll gain even less, because most of a game's memory is used up by assets, not by pointers.
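
For comparison, here is roughly what a 6-byte pointer costs per access (this sketch assumes a little-endian machine and user-space addresses that fit in 48 bits, neither of which the standard guarantees):

#include <cstdint>
#include <cstring>

// Assumes a little-endian machine and addresses that fit in 48 bits;
// the standard guarantees neither.
struct Packed48 {
    unsigned char bytes[6];

    static Packed48 pack(void* p) {
        Packed48 out;
        const std::uint64_t v = reinterpret_cast<std::uintptr_t>(p);
        std::memcpy(out.bytes, &v, 6);                 // keep the low 48 bits
        return out;
    }

    void* unpack() const {                             // extra work on every access
        std::uint64_t v = 0;
        std::memcpy(&v, bytes, 6);
        return reinterpret_cast<void*>(static_cast<std::uintptr_t>(v));
    }
};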