Processor Calculations per sec

So I'm essentially wondering how many things (operations, comparisons, filling registers, loading to/from RAM, whatever else they do) your average 3 GHz+ processor can actually accomplish every second. I'm aware of what GHz means, so my simplistic thought is that 3 billion cycles per second translates into 3 billion things per second.

Is that accurate? 3 billion simple if statements a second (the int==int variety)?

EDIT: dear me, I wrote "million" instead of "billion".
Giga = 10 ^ 9. http://en.wikipedia.org/wiki/Metric_prefix

And no, that's not accurate. The number of clock ticks that various tasks take is not an easy question to answer. From the reading I did on this, exactly how long things take depends on exactly which processor you have and what other instructions are going on at the same time. Modern processors are so complex that you can't say a task takes x clock ticks, my processor does y clock ticks per second, so it can do y / x of that instruction per second.
Giga = 10 ^ 9

Spotted that, brain's a-dying.

So.... the short answer is: no. The longer answer is: no and there's no reasonable way to measure it?

Dang it..
You could measure it using tests on your system or look up your specific processor.
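If you do want a ballpark figure, a rough sketch of that kind of test could look like the loop below. This is not an official benchmark; the result will swing a lot with the compiler, optimization flags and whatever else the machine is doing, and the volatile is only there so the compiler can't throw the comparison away.

#include <chrono>
#include <cstdint>
#include <iostream>

int main()
{
    const std::int64_t iterations = 1000000000LL;  // one billion "int == int" checks
    volatile int target = 42;                      // volatile so the compare can't be folded away
    std::int64_t hits = 0;

    auto start = std::chrono::steady_clock::now();
    for (std::int64_t i = 0; i < iterations; ++i)
        if (static_cast<int>(i) == target)         // the simple comparison asked about above
            ++hits;
    auto stop = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(stop - start).count();
    std::cout << hits << " hits, roughly "
              << iterations / seconds << " comparisons per second\n";
}

Compile with optimizations on (e.g. -O2), otherwise the number means even less.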
Thanks, though I don't think it'll be relevant anymore: I streamlined my collision detection to allow for 10000 objects without any slowdown. I can't think of a use for more than that.

For the record, it seems as though 300^2 2-d rectangle collision checks were about all my 2.2 GHz core could handle (there are 8 cores, but my program is single threaded).
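For anyone following along, a single axis-aligned rectangle check is only a handful of comparisons; a minimal sketch (the Rect layout and names here are purely illustrative, not necessarily what was used above):

struct Rect { float x, y, w, h; };  // axis-aligned box: top-left corner plus width and height

// True if the two rectangles overlap; touching edges count as not overlapping here.
bool overlaps(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

The expensive part isn't any one check, it's that testing every pair of n objects is n*(n-1)/2 checks, which is presumably what the streamlining mentioned above (e.g. a grid or quadtree) avoids.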

EDIT: fixed typo
closed account (Dy7SLyTq)
if i could ask a sort of off topic question... a cycle is the amount of time it takes to execute one instruction right? is it a set amount of time per processor? if so does it scale to multi-threaded programs or would it slow down the cycle time?
I'm obviously no expert, but my assumption has always been that multi-core processors run on the same clock. The advantage would then lie in the ability to parallel process multiple instructions every tick. I think that's how it works.
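To make that concrete: a single-threaded program only ever occupies one core, so to use the others you have to split the work up yourself. A toy sketch with std::thread (the workload and names are made up for illustration):

#include <algorithm>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::int64_t total = 100000000LL;       // some arbitrary amount of work
    std::vector<std::int64_t> partial(cores, 0);  // one slot per thread, nothing shared
    std::vector<std::thread> workers;

    for (unsigned c = 0; c < cores; ++c)
        workers.emplace_back([&partial, c, cores, total] {
            for (std::int64_t i = c; i < total; i += cores)  // each thread takes every cores-th item
                partial[c] += i;
        });

    for (auto& t : workers)
        t.join();

    std::cout << "sum = "
              << std::accumulate(partial.begin(), partial.end(), std::int64_t(0)) << '\n';
}

Whether that actually finishes sooner depends on the memory system and how independent the pieces of work are, which is another reason the clock speed alone doesn't tell you much.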
a cycle is the amount of time it takes to execute one instruction right?


Not necessarily. For example, a superscalar architecture can issue more than one instruction per clock cycle, so better throughput can be achieved than the clock rate alone would suggest.

I got some books out of the library on this subject earlier this year, although I probably didn't understand even half of it... The shit they are doing in chips is absolutely insane! I highly recommend that anyone interested in this read up on it. You can also find some seriously interesting (if you're into that kind of thing) technical material on Intel's website.
closed account (9wqjE3v7)

a cycle is the amount of time it takes to execute one instruction right


It depends on the type of 'cycle' you are referring to. If you are talking about a machine/clock cycle, then no: in a classic RISC (reduced instruction set computing) CPU without pipelining, an instruction goes through at minimum a fetch, decode, execute, memory access and write back stage, so it takes at least 5 machine cycles. It essentially depends on the architecture and the instruction being executed.
As for measuring computer performance, not that it's an SI unit or anything, but FLOPS (Floating-Point Operations Per Second) is a widely accepted metric for a system's performance. But even this has to be normalized to some extent to account for things like error correction and the like.
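To make the FLOPS idea concrete, the usual back-of-the-envelope approach is just counting floating-point operations and dividing by wall-clock time. A very rough single-core sketch (the constants are arbitrary, and the compiler and flags will change the number a lot):

#include <chrono>
#include <iostream>

int main()
{
    const long long n = 200000000LL;     // 200 million passes
    volatile double x = 1.0000001;       // volatile so the multiply can't be pre-computed
    double acc = 0.0;

    auto start = std::chrono::steady_clock::now();
    for (long long i = 0; i < n; ++i)
        acc += x * 1.0000001;            // one multiply + one add = 2 floating-point ops

    auto stop = std::chrono::steady_clock::now();
    double seconds = std::chrono::duration<double>(stop - start).count();
    std::cout << acc << "\n"             // printing acc keeps the loop from being optimized out
              << (2.0 * n / seconds) / 1e9 << " GFLOPS, single core, very rough\n";
}

Real FLOPS benchmarks like LINPACK are far more careful than this, which is part of the normalization mentioned above.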
Topic archived. No new replies allowed.