One more time!
I know many people enjoyed the (rather long) story about time which I published a few months ago… well, here is some additional information. Since the 0.6.0 release (April), we have put a lot of effort into fixing device jittering / drifting. Indeed, every device tends to have its own specific drift because of the lack of formal synchronization between the computer and the device itself. In practice, a device that claims to acquire data at 512Hz may actually acquire at 513Hz, 511Hz or even 512.1Hz. This results in a constantly growing difference between the computer time and the theoretical time of the samples (the number of samples divided by the promised sampling rate). This drift definitely has to be corrected over time so as not to mess up the sample dates… For this reason, we included some drift correction in the acquisition server. Developers should look at the dedicated page.
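To make the drift concrete, here is a minimal sketch of how such a drift could be estimated; the function name and parameters are illustrative, this is not the actual acquisition server code:

    #include <cstdint>

    // Illustrative sketch only, not the actual acquisition server code.
    // Estimates the drift between the computer clock and the theoretical
    // time covered by the samples received so far.
    double estimateDriftSeconds(double elapsedComputerSeconds,
                                uint64_t receivedSampleCount,
                                double promisedSamplingRate)
    {
        // Theoretical duration of the received samples at the promised rate.
        double theoreticalSeconds = double(receivedSampleCount) / promisedSamplingRate;
        // Positive when the device delivers samples slower than promised.
        return elapsedComputerSeconds - theoreticalSeconds;
    }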
During those developments, I had to look into the time measurement code (located in the openvibe system module). This code is platform specific and involves a couple of tricks in order to be as precise as possible.
Linux
On Linux, there is a function called gettimeofday which returns a timeval structure with a precision of one microsecond. Well, the granularity allows resolving down to one microsecond… in practice, the actual period between updates can be longer, but anyway, it promises to be good enough.
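As an illustration, here is a minimal sketch (not the actual OpenViBE code; the function name is made up) of how an elapsed duration can be measured with gettimeofday:

    #include <sys/time.h>
    #include <cstdint>

    // Illustrative sketch only: elapsed time in microseconds since 'start',
    // measured with gettimeofday.
    int64_t elapsedMicroseconds(const timeval& start)
    {
        timeval now;
        gettimeofday(&now, nullptr);
        return int64_t(now.tv_sec - start.tv_sec) * 1000000
             + int64_t(now.tv_usec - start.tv_usec);
    }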
Windows
On Windows, the most commonly used function is timeGetTime, which returns an integer value with a precision of one millisecond. This is not as good as on Linux, and the documentation says we should not trust the returned value beyond 5 or 6 milliseconds of accuracy, but anyway, it is even handier than the gettimeofday function since it returns an integer, not a structure!
Another kind of function is the pair of high precision counters (QueryPerformanceCounter and QueryPerformanceFrequency). These two functions are able to give an extremely precise estimation of the clock, based on the CPU cycles.
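For illustration, a minimal sketch (not the actual OpenViBE code) reading both Windows clocks side by side could look like this:

    #include <windows.h>
    #include <mmsystem.h>   // timeGetTime, requires linking with winmm.lib
    #include <cstdio>

    // Illustrative sketch only: read the millisecond tick and the high
    // precision counters, and print both as seconds.
    int main()
    {
        // timeGetTime: a plain integer number of milliseconds.
        DWORD milliseconds = timeGetTime();

        // High precision counters: a tick count plus the tick frequency.
        LARGE_INTEGER counter, frequency;
        QueryPerformanceCounter(&counter);
        QueryPerformanceFrequency(&frequency);
        double preciseSeconds = double(counter.QuadPart) / double(frequency.QuadPart);

        printf("timeGetTime: %.3f s, high precision counters: %.6f s\n",
               milliseconds / 1000.0, preciseSeconds);
        return 0;
    }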
The first Windows implementation of the zgetTime function (which gives the current time in seconds, as a 32:32 fixed point value) used the high precision counters. However, while trying to solve the drifting issues, I realized that this implementation itself had a small drift over time. Indeed, it measures the time elapsed in the calling thread, not the real time. After 24 hours of measurement, this resulted in a difference of a dozen seconds between the Linux gettimeofday and the Windows high precision counters… That was enough for me to miss P300 detection after something like 5-10 minutes. (Yeah, I'm not a P300 champion :) ). I modified the Windows implementation to call the timeGetTime function instead, and with this call, the difference with the Linux gettimeofday was approximately 650 milliseconds after 24 hours of measurement… That means I should be able to do a 1 hour long P300 speller session! :)
Handling the returned values
Now, what I did not explain is how to handle the returned values. Indeed, each function returns a value in its own representation system, and we have to convert it to the now famous 32:32 representation while minimizing the loss of information. Suppose the time we get from the function is t, that its scale (relative to seconds) is s, and that we want to return it as 32:32 bits fixed point (c = 1 << 32); then we have to compute the following:

    (c * t) / s
On one hand, if c and t are big numbers, then (c * t) is likely to overflow quickly. On the other hand, computing c / s or t / s first can result in a significant loss of information… all the more since s cannot always be anticipated.
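For instance, with gettimeofday (s = 1 000 000) and c = 1 << 32, the product (c * t) already exceeds the 64-bit range once t reaches 2^32 microseconds, that is after roughly 72 minutes; and truncating (c / s) from about 4294.97 down to 4294 introduces a relative error around 2·10^-4, which is already close to a second of error per hour.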
But… this kind of computation is very specific and is called a muldiv, pointing out the fact that you know you are doing a multiplication followed by a division and that the overall operation does not overflow.
Guess what, it is actually possible to compute the result of this operation with minimal loss. But you will have to remember your math basics!
So let's accept that c, t and s can be big numbers, that (c * t) definitely overflows, and that (c / s) and/or (t / s) can definitely be close to 0 or needlessly imprecise. But let's trust that (c * t) / s itself is just a reasonable value.
Then you can compute the optimal muldiv with the following formula:

    (c * t) / s = c * (t / s) + (c * (t % s)) / s

t / s is the quotient of the Euclidean division of t by s, and t % s is the remainder of this division. So finally, considering c as being (1 << 32), we have the following formula:

    (1 << 32) * (t / s) + ((1 << 32) * (t % s)) / s

which turns into:

    ((t / s) << 32) + (((t % s) << 32) / s)
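As an illustration, here is a minimal sketch of this muldiv in C++, specialized for c = 1 << 32; the function name is made up and this is not the actual system module code:

    #include <cstdint>

    // Illustrative sketch only: converts a raw timer value t, expressed in
    // ticks of 1/s second, into 32:32 fixed point seconds.
    // The intermediate ((t % s) << 32) cannot overflow as long as s fits in 32 bits.
    uint64_t toFixedPoint32_32(uint64_t t, uint64_t s)
    {
        // (t / s) is the whole number of seconds, (t % s) the remaining ticks.
        return ((t / s) << 32) + (((t % s) << 32) / s);
    }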
On Linux with gettimeofday, s = 1 000 000 (as the result is returned in microseconds)
On Windows with timeGetTime, s = 1 000 (as the result is returned in milliseconds)
On Windows with high precision counters, s depends on the CPU architecture and is returned by the QueryPerformanceFrequency call… how convenient!
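As a usage sketch, building on the hypothetical toFixedPoint32_32 helper above, the three cases would look as follows; each snippet is platform specific, so they are only shown side by side for illustration:

    // Linux, gettimeofday: the seconds go straight to the upper 32 bits,
    // the microseconds go through the muldiv with s = 1 000 000.
    timeval tv;
    gettimeofday(&tv, nullptr);
    uint64_t linuxTime = (uint64_t(tv.tv_sec) << 32)
                       + toFixedPoint32_32(uint64_t(tv.tv_usec), 1000000);

    // Windows, timeGetTime: s = 1 000.
    uint64_t windowsTime = toFixedPoint32_32(timeGetTime(), 1000);

    // Windows, high precision counters: s comes from QueryPerformanceFrequency.
    LARGE_INTEGER counter, frequency;
    QueryPerformanceCounter(&counter);
    QueryPerformanceFrequency(&frequency);
    uint64_t preciseTime = toFixedPoint32_32(uint64_t(counter.QuadPart),
                                             uint64_t(frequency.QuadPart));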
Organizing the bit shifts as described in the (rather long) story about time is fast for multiplications and divisions… but it is almost never optimal when it comes to precision. The implementation described here gives an optimal error at the cost of slightly more complex computations. Make your choice :)