Is 64 bits enough?

I'm sure you all know that a CPU with a 32-bit address space has a limit of 2^32 bytes, meaning it can only address 4 GiB of RAM. With PAE on x86 this goes up to 2^36 = 64 GiB, but 64-bit processors increase it further to a total of 2^64 = 16 EiB, which is more than 16 million terabytes. At the moment, this is an unimaginably large quantity. Current estimates are that humans have produced about 3 ZiB of data in total, which means that one system with 16 EiB of RAM could store about 0.5% of that -- and that would be volatile system memory: with a powerful enough processor you could operate on 0.5% of all of humanity's data. I'm sure the NSA and the data-mining companies Facebook sells our personal details to would love that.
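
For the curious, here's a quick sketch that checks the arithmetic (the ~3 ZiB figure is just the estimate quoted above, not something this computes):

#include <cstdio>
#include <cmath>

int main() {
    const double addr32 = std::pow(2.0, 32);             // 32-bit limit
    const double addr64 = std::pow(2.0, 64);             // 64-bit limit
    const double humanity = 3.0 * std::pow(2.0, 70);     // ~3 ZiB estimate
    std::printf("32-bit: %.0f GiB addressable\n", addr32 / std::pow(2.0, 30));
    std::printf("64-bit: %.0f EiB addressable\n", addr64 / std::pow(2.0, 60));
    std::printf("16 EiB is %.2f%% of ~3 ZiB\n", 100.0 * addr64 / humanity);
    // Prints 4 GiB, 16 EiB, and ~0.52% (i.e. 1/192 of all human data).
}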

My question for this topic is: is that enough for good? Will we, one day, want a 128-bit processor? There isn't even a word for how much data you could address with a 128-bit address bus. The next thing up is 1 Googolbyte (which would actually require 333 bits to address)... while I'm at it, will we ever see a 1 GgB hard drive ([edit] assuming we're still using hard drives then, which is unlikely)?

[edit] If storage capacity keeps doubling about every 18 months (the storage analogue of Moore's Law), we would see a 1 GgB storage medium in just under 500 years (log2(10^100) * 1.5 years ≈ 498 years, counting doublings from a single byte).
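
Spelled out as a sketch (assuming capacity doubles every 18 months, starting from one byte):

#include <cstdio>
#include <cmath>

int main() {
    // Doublings needed to go from 1 byte to 10^100 bytes: log2(10^100).
    const double doublings = 100.0 / std::log10(2.0);    // ~332.2
    std::printf("doublings: %.1f\n", doublings);
    std::printf("years at 1.5 years/doubling: %.0f\n", doublings * 1.5);
    // Prints ~498 years, matching the estimate above.
}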
Considering that 2^64 seconds is about 585 billion years -- far more than the current age of the universe -- I think we won't need more than 64 bits for anything pertaining to time or storage. For encryption, though, we may want 128 bits.
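
A quick check of that figure (assuming a 365.25-day year):

#include <cstdio>
#include <cmath>

int main() {
    const double seconds = std::pow(2.0, 64);            // ~1.8e19 s
    const double year = 365.25 * 24 * 60 * 60;           // ~3.16e7 s
    std::printf("2^64 seconds ~ %.0f billion years\n", seconds / year / 1e9);
    // Prints ~585 billion years, vs. ~13.8 billion years since the Big Bang.
}
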
Human law: nothing is ever enough. Or can somebody point to a case where someone said "good enough, I'll stop here" and stuck to it, without eventually trying to do better and without dying before they got the chance?
admkrk wrote:
Or can somebody point to a case where someone said "good enough, I'll stop here" and stuck to it, without eventually trying to do better and without dying before they got the chance?
School assignments?
It's not hard at all to come up with problems that computer scientists are interested in solving, but that require more memory than we will probably ever be able to build computers to address.

One example is the problem of building the graph of the probability of a subset of variables being true, given another subset of those variables is true. This graph has 2^n nodes and (2^n)(2^n - 1)/2 edges. So with 128-bit addresses you could just barely build the graph for about 64 variables, assuming each edge takes one address, and each node one address.
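
A back-of-the-envelope check of that (the one-address-per-node-and-edge assumption is from the post above):

#include <cstdio>
#include <initializer_list>

int main() {
    // For n variables: 2^n nodes and 2^n(2^n - 1)/2 ~ 2^(2n-1) edges,
    // so about 2n - 1 address bits are needed to give each edge an address.
    for (int n : {32, 50, 64, 65})
        std::printf("n = %2d variables -> ~%3d address bits\n", n, 2 * n - 1);
    // n = 64 needs ~127 bits, so it just barely fits a 128-bit space;
    // n = 65 already needs ~129 bits.
}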

Scientists would like to do this with millions of variables if they could. So, theoretically, assuming computers continue to get faster and faster, we will need more and more addresses.

It really depends on how much faster computers get, though. And keep in mind we are talking about how fast the fastest supercomputers get.

I may be wrong here, but by 64 bits we just mean the size of the virtual address space. The actual physical memory is much, much smaller than this.

So program performance depends on the number of cache hits/misses rather than the number of virtual addresses available to us.
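
As a minimal sketch of that distinction (POSIX, assuming Linux for MAP_NORESERVE): a process can reserve far more virtual address space than the machine has physical RAM, because nothing physical is committed until the pages are actually touched.

#include <sys/mman.h>
#include <cstdio>
#include <cstddef>

int main() {
    const std::size_t one_tib = 1ULL << 40;  // 1 TiB of *virtual* space
    void* p = mmap(nullptr, one_tib, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
    std::printf("Reserved 1 TiB of virtual addresses at %p\n", p);
    munmap(p, one_tib);  // no physical memory was ever allocated
}
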
I'll say this about the hard drives: no. Can you imagine what the seek time/cache latency would be on a drive that size? It would never make it into mass production, and if it did, no one would buy it. The only systems that would "need" that kind of capacity would be servers, and in that case you can't just buy one of them, because you have to mirror them for redundancy. Have fun restoring that array when one of them croaks.
Couldn't we have said the exact same thing 20 years ago about 4 TB drives? 4 TB drives have a little over a million times the capacity of the earliest drives, and a little less than a millionth of 2^64 bytes.
Well, as file compression gets better and hard drives get bigger, I can picture a scenario where 1 EiB is a bit much for 99% of people. In the past, 1 GB would not hold a quality video of anything, even though that was once a pretty good-sized drive.

However, I'm sure we'll find a use for it. Maybe 3D environments that use point-cloud data for amazing visuals and detail will become prevalent, which would use huge amounts of data. Although I personally know little about the subject... I'm sure there are other directions that can be taken.
I guess my post was assuming that things remained the way they are. As helios pointed out, drastic changes are bound to happen, as they did in the past.

Right now, most commercially available drives come in two flavors: spinning drives and solid state. With spinning drives you are relying on a mechanical actuator for accessing data. As with all mechanical components like this, there is a margin of error that has to be corrected for with another pass over the platter. This is why, as the tracks on the platter get narrower and therefore more dense, the seek time noticeably increases. The 4 TB drives available now are impressive, but with a seek time of 12 ms they are a ways away from being useful to end users.

Maybe we can cheat our way around this by stacking up more platters or adding more read heads, but a new form factor is probably the more realistic solution.
The 4 TB drives available now are impressive, but with a seek time of 12 ms they are a ways away from being useful to end users.
Most people don't seem to have any problems with them. Random seek times are only relevant if you're performing more than one I/O-intensive operation at a time, which isn't the case on your average desktop.
The real speed problem we're going to face, well before HDD seek times become an issue, is a step up in the memory hierarchy. RAM is speeding up asymptotically more slowly than CPUs are. In recent years, almost every development in CPU optimization has involved not accessing RAM. Increasingly, cache behavior dominates the performance of all applications. If that isn't fixed, speeding up components lower in the hierarchy is entirely pointless.
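
To illustrate (a rough sketch; the exact numbers depend on your hardware): both passes below do the same amount of work, but the strided one touches a new cache line on almost every access and typically runs several times slower.

#include <chrono>
#include <cstdio>
#include <cstddef>
#include <vector>

int main() {
    const std::size_t n = 1u << 26;           // 64 Mi ints (~256 MiB)
    std::vector<int> data(n, 1);
    long long sum = 0;

    auto time_pass = [&](std::size_t stride) {
        auto start = std::chrono::steady_clock::now();
        for (std::size_t offset = 0; offset < stride; ++offset)
            for (std::size_t i = offset; i < n; i += stride)
                sum += data[i];               // same total work either way
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double>(stop - start).count();
    };

    std::printf("sequential: %.3f s\n", time_pass(1));
    std::printf("strided:    %.3f s\n", time_pass(16));  // 16 ints = 64 bytes
    std::printf("(sum = %lld)\n", sum);  // keeps the loops from being optimized out
}
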
I'm not sure what you mean by "average desktop". Historically the market favors business PCs, and with so much data flowing back and forth between geographically distinct sites, disk queue times are a noticeable hindrance on every non-dummy terminal. Now granted, most of that delay is due to SATA still being tied to the south bridge, but that's being phased out (PCH), so seek time is coming back as a concern in the next year or so. Maybe they could cheat this limitation with a combination of a large and fast enough HDD cache and some OS pre-caching features, but that seems like a crutch to me.
with so much data flowing back and forth between geographically distinct sites disk queue times are a noticeable hindrance
Huh? Could you clarify this statement?
south bridge but that's being phased out (PCH)

I hadn't heard about this. Any word on the performance vs. south bridge models?
@ helios: Take something like an email server for a medium-sized company. You're talking about thousands of messages a day, ideally at only a few KB apiece, but you know that everybody has to forward the previous messages in their entirety ... in HTML format, with attachments, and they have to maintain the signatures of every single message ... in bitmap no less ... I'm exaggerating, of course. The point that I'm trying to make is that the target market for this kind of capacity needs high throughput as well as high capacity. Capacity can be achieved by spanning drives, but you can't fake speed (except in the ways I listed above). I know that I'm back-pedaling here, but in hindsight I realize that you're talking about end users and I'm talking about servers, so I'll go ahead and recognize the derailment here. Cheers.

@ Cheraphy: I wish I had a side-by-side comparison. I've only read a couple of second-hand reviews that, as usual, feel like more hype than fact.

Maybe we can cheat our way around this by stacking up more platters or adding more read heads, but a new form factor is probably the more realistic solution.


HDDs will simply get replaced by SSDs. It's already happening in laptops and (slowly) in servers; soon it will happen everywhere.