size_t - Why is the size of the largest data object limited?

I was looking into the mysterious usage of the size_t type, which I understand to be an unsigned int.

Excerpt from http://www.embedded.com/electronics-blogs/programming-pointers/4026076/Why-size-t-matters


Although the size of an int varies among C implementations, on any given implementation int objects are always the same size as unsigned int objects. Thus, passing an unsigned int argument is always the same cost as passing an int.

Using unsigned int as the parameter type, as in:

void *memcpy(void *s1, void const *s2, unsigned int n);
works just dandy on any platform on which an unsigned int can represent the size of the largest data object. This is generally the case on any platform in which integers and pointers have the same size, such as IP16, in which both integers and pointers occupy 16 bits, or IP32, in which both occupy 32 bits. (See the sidebar on C data model notation.)
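
For reference, the prototype the C standard actually adopted uses size_t instead (the restrict qualifiers are the C99 spelling):

#include <stddef.h>  /* size_t */

void *memcpy(void *restrict s1, const void *restrict s2, size_t n);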


Is this implying that a 16-bit system can only have an object of size 65 535 bytes? Why?
Because that's the largest amount of RAM that can be attached to the system. This is also why 32-bit machines were limited to 4 GB of RAM, and why a 32-bit size_t tops out at 4294967295 (2^32 - 1).

Start thinking about the hardware and stuff like this becomes obvious.
I'm sorry, I do not have a working knowledge of hardware and RAM. Why is the largest amount of RAM limited by the 'bit' of the system? How can memory in general be limited by the 'bit' of the system? E.g. your hard drive can go from 100 GB to 2 TB.
a 16-bit system can only have an object of size 65 535 bytes? Why?

Even though you can install and use a lot more than 64 KB of RAM on a 16-bit system, an object (formally, an object representation, which is what sizeof measures) has to occupy a contiguous sequence of memory addresses. With 16-bit CPU registers used to store those addresses, you simply can't use pointer arithmetic past that boundary - it wraps to zero. A more telling wiki page is probably https://en.wikipedia.org/wiki/X86_memory_segmentation
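
You can see this ceiling on your own implementation by printing SIZE_MAX from <stdint.h>. A minimal sketch - the value printed depends entirely on the data model:

#include <stdint.h>  /* SIZE_MAX */
#include <stdio.h>

int main(void)
{
    /* SIZE_MAX is the largest value a size_t can hold, and therefore
       the largest object size that sizeof can report. An IP16
       implementation prints 65535 here; an IP32 one, 4294967295. */
    printf("largest object size: %zu bytes\n", (size_t)SIZE_MAX);
    return 0;
}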
I know xeef has given you some wiki links, but I'll try to answer your specific questions:

1). Why is the largest amount of RAM limited by the 'bit' of the system?

The system addresses RAM using memory instructions, and every addressable byte must fit into the system's memory map. Since the system needs to address RAM down to the byte level, the largest area of RAM that can be addressed is determined by the widest address the system's memory instructions can carry.

So a 16-bit processor issues 16-bit memory addresses, a 32-bit processor issues 32-bit addresses, and a 64-bit processor issues 64-bit addresses.
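
The arithmetic is simple: with N address bits and byte-level addressing, there are 2^N distinct byte addresses. A quick sketch of the numbers:

#include <stdio.h>

int main(void)
{
    /* N address bits give 2^N addressable bytes */
    printf("16-bit: %llu bytes\n", 1ULL << 16);  /* 65536 (64 KB) */
    printf("32-bit: %llu bytes\n", 1ULL << 32);  /* 4294967296 (4 GB) */
    return 0;
}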

2). How can memory in general be limited by the 'bit' of the system? E.g. your hard drive can go from 100 GB to 2 TB.

"Memory" in general doesn't exist. You have directly addressable memory, and attached peripherals. Hard drives are peripherals and use block storage, they store things in blocks and are not limited to 32-bit memory instructions. That's what drivers and Operating systems are for.

Another important point is that hard drives are not connected directly to the processor; they are attached through a drive controller, and it is the controller, not the drive's capacity, that must fit into the memory map.
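
To put numbers on the block-storage point, here's a sketch that assumes a 32-bit block address and the common 512-byte sector size (real drive interfaces vary in both widths):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Drives are addressed in blocks (sectors), not bytes, so even a
       32-bit logical block address reaches far beyond 4 GB:
       2^32 sectors * 512 bytes = 2199023255552 bytes (2 TB). */
    uint64_t sectors     = 1ULL << 32;  /* 32-bit LBA space (assumed) */
    uint64_t sector_size = 512;         /* bytes per sector (assumed) */
    printf("capacity: %llu bytes\n",
           (unsigned long long)(sectors * sector_size));
    return 0;
}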

A neat experiment is to grab an old 32-bit machine, install 4 GB of RAM, put Windows on it, and ask yourself why only about 3.7 GB is available. That's because the peripherals have taken some of the address space.

If you don't have a good knowledge of how hardware works, then you should do some reading. I'm not sure if this is a good place to ask questions, but if you like you can pick my brain via private message (I design chips for a living and can probably answer all of your hardware questions).

And the perfect wiki page to start your reading with: http://en.wikipedia.org/wiki/RAM_limit
Thank you all for the replies, xeef, cubbi, and especially ValliusDax.

I have a feel for it now, and I really appreciate the links!