If you want to see what a rabbit hole looks like, parse this sentence from the article:
"The beauty of this is that with a precise enough timer, you also solve the multithreading issue because nothing ever happens at exactly the same time."
In the first part the author describes a technique that narrows the window for failure, and then adds a fallacy that sounds good but isn't true.
This way of thinking (narrowing windows to the point where they are probabilistically rare "enough") has been the source of many bugs.
Urs Hölzle (VP at Google) once said something I really liked: "At a large enough scale, statistically impossible things happen every day." It is painful to accept, but I've seen it in action.
He'd be better off getting the time in seconds, and then choosing a random number as well. At least then the chance of collision would be a known probability!
I don't understand the use of clock_getres() here. A second of POSIX monotonic clock time means a second of real time; the clock resolution has no bearing on that. It seems to me that on any system where the claimed CLOCK_MONOTONIC resolution isn't 1 ns, this code will advance the clock at the wrong pace.
This really looks like a problem with the Windows implementation of std::chrono::steady_clock more than anything else. This clock has nanosecond resolution on most *nix platforms (including Mac OS X) and offers the strictly increasing guarantee required (without the possibility of discontinuities).
It seems like (in the long term) it would be better to push some std::chrono::steady_clock Windows patches to libstdc++/libc++/MSVC and use those instead of re-inventing the wheel.
Thanks for reporting this bug. We've fixed it, and the fix will be available in the next major version of VC (i.e. after 2013).
steady_clock and high_resolution_clock are now synonyms powered by QueryPerformanceCounter() converted to nanoseconds. QPC meets the Standard's requirements for steadiness/monotonicity.
Additionally, the CRT's clock() has been reimplemented with QPC. While this improves precision and conformance to the C Standard (as QPC is monotonic), we are aware that this is not completely conformant. Our CRT maintainer has chosen to avoid having clock() return CPU time advancing faster than 1 second per physical second, which could silently break programs depending on the previous behavior.
So it is expected that VS2014 should have the fixes.
On a side note, I've seen most, if not all, of the code posted by the OP before while searching for the same functionality. The piece after "you will write your own function:" looks like a straight-up copy from some FOSS project; even the comments seem to match. I can't be 100% certain, but if that's the case it would be nice to mention the source. I can't find it at the moment, but I'm sure a C++11-clock-compatible implementation using QPC has been posted on Stack Overflow.
The function you are looking for on Windows is possibly not QueryPerformanceCounter(). It's unreliable across various hardware, especially in multithreaded applications on multi-core/multi-CPU systems, and even more so when running Windows under a VM. QPC can use RDTSC(P), but that's only one of its options, and even when RDTSC(P) is used it doesn't actually guarantee anything reassuring.
Go with timeGetTime(), remembering to call timeBeginPeriod(1) early (usually at application startup) to set the minimum resolution of periodic timers to 1 ms (which only happens if the hardware provides that much resolution), and to call timeEndPeriod(1) once you're done working with time (usually at application exit). Milliseconds don't give you high resolution, but at least working at that resolution is reliable. Having µs or ns garbage is hardly any better...
In recent years QueryPerformanceCounter is actually quite reliable since it's guaranteed not to change frequency during runtime.
timeGetTime() and company, though, operate at the highest frequency any application has specified, and as such can be quite a drain on portable power systems (laptops etc.). When one application calls timeBeginPeriod(1), the laptop needs to wake up more frequently and is hence less power efficient.
FTA: "Assuming servers are kept synchronized enough (the enough depending on your application), you may just solve the problem by acquiring time precisely enough."
How on earth are "sub-microsecond timers" supposed to be synchronized anywhere?
We have special hardware that syncs our servers to an atomic clock in Colorado. It's about +/- a few microseconds off; not sure about sub-microsecond, though. Another problem is that you will run into clock drift and NTP time-adjustment bugs. Honestly, I stopped reading once he mentioned that time was going to be his secret sauce. There are just too many subtle issues with using that as your globally unique identifier.
"So; to cut a long story short, if you want an accurate performance measurement you're mostly screwed. The best you can realistically hope for is an accurate time measurement; but only in some cases (e.g. when running on a single-CPU machine or "pinned" to a specific CPU; or when using RDTSCP on OSs that set it up properly as long as you detect and discard invalid values)."
[...] for Intel Core Solo and Intel Core Duo processors [...]: the time-stamp counter increments at a constant rate. The specific processor configuration determines the behavior. Constant TSC behavior ensures that the duration of each clock tick is uniform and supports the use of the TSC as a wall clock timer even if the processor core changes frequency. This is the architectural behavior moving forward.
Intel Architectures software developer system programming manual 17.13
A minor correction: POSIX only mandates CLOCK_REALTIME; CLOCK_MONOTONIC is optional (POSIX Advanced Realtime Extensions). Linux provides both, as well as CLOCK_MONOTONIC_RAW, and some BSDs like FreeBSD provide CLOCK_MONOTONIC_PRECISE.
Here's our (Bloomberg) version of this. The bsls::TimeUtil component is used to implement bsls::Stopwatch. It supports high-resolution timestamps on OS X/Windows as well as Solaris, AIX, and HP-UX (SPARC, POWER, IA64). It's a good base to start from, and we'll continually tweak it for more performance as platforms change (I think there are a few patches in the pipeline).
Eh, what about not relying on timestamps at all in distributed systems? Use Lamport or vector clocks if you want a notion of time in a distributed system. This article makes me more reluctant to consider quasardb.
ChuckMcM | 11 years ago
IvyMike | 11 years ago
http://blogs.msdn.com/b/larryosterman/archive/2004/03/30/104...
xsmasher | 11 years ago
shin_lao | 11 years ago
In our case, we have some additional homework to make it work on several nodes, which is well beyond the scope of this post.
As for multithreading, the functions are guaranteed monotonic.
"Nothing happens at the same time" is a reference to the laws of physics.
codexon | 11 years ago
jsnell | 11 years ago
> This is not a hard task. Nothing we've done above requires more than reading the documentation carefully. Attention to details like this is what makes the difference between working and rock-solid software. #frencharrogance
Um, right...
gilgoomesh | 11 years ago
stinos | 11 years ago
przemoc | 11 years ago
daemin | 11 years ago
leif | 11 years ago
dclusin | 11 years ago
unknown | 11 years ago
[deleted]
profquail | 11 years ago
yodaiken | 11 years ago
batbomb | 11 years ago
Someone | 11 years ago
shin_lao | 11 years ago
pbsd | 11 years ago
oso2k | 11 years ago
rdtsc | 11 years ago
eliteraspberrie | 11 years ago
For OS X, use mach_absolute_time, described here: https://developer.apple.com/library/mac/qa/qa1398/_index.htm...
shmerl | 11 years ago
nly | 11 years ago
bogolisk | 11 years ago
Really, nothing new to see...
apaprocki | 11 years ago
https://github.com/bloomberg/bde/blob/master/groups/bsl/bsls...
Component docs: http://bloomberg.github.io/bde/group__bsls__timeutil.html
strictfp | 11 years ago