clock() timing inconsistencies

I'm running timing experiments on a program that combines several algorithms. In short, it reads data, performs an initialisation phase, then cycles over K different optimization phases until none of them leads to an improvement.

I'm comparing a K=3 versus a K=4 run; I control this with a (global) static const integer hardcoded in the "highest-level header" (which only contains basic definitions and STL includes). This value is used in two or three spots to allocate a small array, and in several other spots just as a loop boundary.
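For context, the setup looks roughly like this (definitions.h, K, phase_gain and run_phase are names I've made up for this post; the real header and code are larger):

// definitions.h -- rough sketch of the "highest-level header";
// the real one only holds basic definitions and STL includes.
#ifndef DEFINITIONS_H
#define DEFINITIONS_H

#include <vector>
#include <string>

static const int K = 3;   // number of optimization phases; 4 in the other build

#endif

// main.cpp -- how K is used: once to size a small array, elsewhere as a loop bound.
#include "definitions.h"

int main()
{
    double phase_gain[K] = {};            // small array allocated with K

    bool improved = true;
    while (improved)                      // cycle until no phase improves
    {
        improved = false;
        for (int k = 0; k < K; ++k)       // K as a loop boundary
        {
            // improved = run_phase(k) || improved;   // hypothetical phase call
            (void)phase_gain[k];
        }
    }
}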

I time the aggregated duration of the optimization phases using clock(). I've now noticed that the timing precision of the K=4 run is down to the microsecond, whereas the K=3 run only has multiples-of-16-ms precision. For small benchmark instances, this means the K=3 run reports a duration of either 0 or 16 ms, whereas the K=4 run shows (probably accurate) 4-5 ms durations.
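The timing itself is done roughly like this (do_phase_work stands in for the real optimization phase):

// timing.cpp -- roughly how I aggregate the optimization time with clock().
#include <ctime>
#include <cstdio>

static volatile double sink = 0.0;

static void do_phase_work()
{
    for (int i = 0; i < 1000000; ++i)
        sink = sink + i * 0.5;            // dummy work so there is something to time
}

int main()
{
    std::clock_t total = 0;

    for (int pass = 0; pass < 5; ++pass)
    {
        std::clock_t start = std::clock();
        do_phase_work();
        total += std::clock() - start;    // aggregate across all phase invocations
    }

    // clock() returns ticks of processor time; convert via CLOCKS_PER_SEC
    std::printf("aggregated optimization time: %.3f ms\n",
                1000.0 * total / CLOCKS_PER_SEC);
    return 0;
}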

The only things that change between the runs are the hardcoded value of the static const and the hardcoded output filename strings in main.

Is there any logical reason for this? Can I "force" the higher precision for both runs?
After a few more tests, it seems it's not the value of K that causes this; the precision just changes from build to build.
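In case it helps, this is roughly what I'd switch to if clock() itself can't be forced to a finer resolution, assuming C++11's <chrono> is acceptable (I realise clock() measures processor time while steady_clock measures wall-clock time, so the numbers aren't strictly comparable):

// chrono_timing.cpp -- possible replacement using std::chrono::steady_clock.
#include <chrono>
#include <cstdio>

int main()
{
    using clk = std::chrono::steady_clock;

    clk::time_point start = clk::now();
    // ... the optimization phases would run here ...
    clk::duration elapsed = clk::now() - start;

    // convert the clock's native duration to fractional milliseconds
    double ms = std::chrono::duration<double, std::milli>(elapsed).count();
    std::printf("optimization time: %.3f ms\n", ms);
    return 0;
}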