# Sticky: Methods of Determining the Accuracy of a Watch



## Eeeb

We are indebted to *South Pender* and *Catalin* for developing this post. Since this is a sticky, please confine follow-on posts to those germane to the topic.

*Methods of Determining the Accuracy of a Watch*
WUS HEQ Members, March, 2010

There are several ways of determining just how much your watch is gaining or losing time against an absolute standard. These methods differ in sophistication and in the accuracy of your results. Let's consider three methods here. There are undoubtedly more than this, but these three are all we need to get the job done.

_*Method 1: The Naked-Eye Offset Determination Method*_

That's a fancy title we made up for this, the simplest and least precise of the methods we'll consider.

*Needed.*

- access to a _precise_ representation of the time, based on an atomic clock and referred to below as the *reference clock*;
- your watch.

One source that is readily available as the reference clock is the time representation on the website:

http://www.time.gov/timezone.cgi?Pacific/d/-8/java

The time zone can be changed easily once on this website. This site displays the time in the form Hr:Min:Sec as XX:XX:XX. The accuracy of the time display at any one time is given just below the time as "Accuracy within 0.X seconds." In this, 0.X can be as low as 0.1 and can occasionally be over one minute. Most of the time it is around 0.2. It is best to use this reference clock when it is showing either 0.1 or 0.2 accuracy. This suggestion applies with Method 2 below as well.

*Procedure. *With the 2-digit seconds (in the display XX:XX:*XX*) from the reference clock changing on your computer screen, hold your watch close to the display on the screen and, keeping an eye on both the reference clock and your watch, note the minute marker your watch's second hand lands on when the reference clock hits XX:XX:00 (or some other easy reference point). Let's say you are doing this at around 9:30. When the reference clock shows exactly 09:30:00, on what minute marker has the second hand of your watch just landed? Say the second hand has landed at the 9-minute mark (this could be seen as a "second marker" in the present context), while the minute and hour hands are registering 9:30; in that case, your watch is running 9 seconds fast. Or, if your watch's second hand landed at the 54-minute mark when the minute and hour hands were registering 9:29, your watch would be running 6 seconds slow.

A good way to track the time-keeping accuracy of your watch would be to set it to coincide _precisely_ with the reference clock. That is, stop the watch and have it ready to restart at the second the reference clock hits XX:XX:00 (with, of course, the hour and minute correct). At that point, your watch will be absolutely precise, or at the identical time point as the atomic clock from which the reference clock is getting its data. Some time later, you can do the time check described in the above paragraph, and any displacement from what the reference clock is showing will index the gain or loss taking place in your watch over that time period. Thus, say that you set your watch to be exactly on with the atomic-clock reading, and 30 days later check how your watch is doing with respect to the atomic clock. If now (30 days later) it is indicating a time that is 3 seconds ahead of that on the reference clock, your watch is running fast at the rate of 3 seconds in 30 days. We often express watch accuracy in _seconds per year_ (or _spy_). Our result of 3 seconds in 30 days could be transformed into seconds per year by taking (365/30) × 3.0 = *36.5 spy*.
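That last proration is trivial to script. A minimal Python sketch (the function name is ours, not part of the method):

```python
def seconds_per_year(drift_seconds, days):
    """Prorate an observed gain/loss over `days` days to seconds per year (spy)."""
    return (365 / days) * drift_seconds

# The example above: +3 seconds of gain observed over 30 days.
print(round(seconds_per_year(3.0, 30), 1))  # 36.5
```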

*Accuracy of the Method.* With practice, one can easily stay within 1 second accuracy for the period of time, and can probably get to .50-second accuracy. By the latter, we mean that you will be able to determine that the watch is more than, say, 2 seconds off and less than 3 seconds off, and so would register it as 2.5 seconds off perfect time. For short time periods between setting the watch to absolutely correct time and checking its accuracy, this represents a relatively poor level of accuracy, but for a longer period (e.g., 6 months) it is adequate to give a sufficiently accurate estimate of seconds-per-year accuracy.

_*Method 2: The Stopwatch Method*_

The provenance of this method is likely lost in the mists of time, but we are indebted to WUS Forum Member *Oldtimer2* for bringing it to our attention recently. This method produces more accurate results than does the simpler approach in Method 1 above. What follows is what appeared in Oldtimer2's post, but edited to make it read consistently with the rest of this piece.

*Needed.*

- reference clock as above;
- stopwatch (this can be a very inexpensive quartz sports stopwatch);
- your watch.
*Procedure.* With the 2-digit seconds (in the display XX:XX:*XX*) from the reference clock changing on your computer screen, it's easy to judge the accuracy offset of a test watch by eye to within a fraction of a second. First start the stopwatch the instant the reference clock reaches a convenient point, for example, XX:XX:00. Next, glance at your watch and stop the stopwatch at precisely the point the second hand of your watch lands on a minute marker that represents some arbitrary number of seconds ahead of the starting point, for example, XX:XX:10.

_*Example:*_ We _start_ the stopwatch the precise instant the reference clock shows 10:30:*00*. Now, glancing at our watch, we _stop_ the stopwatch at the exact instant the second hand of the watch lands on the 10-minute (or second) marker with the hour and minute at 10:30. We now read the elapsed time on the stopwatch. If it reads 11.50 seconds, for example, this would indicate that the watch is running 1.50 seconds slow, or behind the reference clock (11.50 - 10.00). On the other hand, if the stopwatch reads 8.25 seconds, then we can conclude that the watch is running 1.75 seconds fast, or ahead of the reference clock (8.25 - 10.00). We can start the stopwatch at any convenient point on the reference clock and stop it at any convenient point on our watch some number of seconds beyond the starting time. For example, we could start the stopwatch when the reference clock is at 10:30:20 and stop it when the watch second hand lands precisely on 10:30:30. Since these times are 10 seconds apart, any deviation from 10:00 on the stopwatch indicates some offset from perfect time (with numbers larger than 10:00 indicating that the watch is slow, and numbers smaller than 10:00 that the watch is fast).
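The sign logic of this example can be captured in a tiny helper (a sketch; the function name is ours):

```python
def stopwatch_offset(elapsed, interval):
    """Watch offset versus the reference clock.

    elapsed:  seconds read off the stopwatch
    interval: nominal seconds between the reference start point and the
              watch marker used to stop the stopwatch (e.g., 10)
    Positive result: the watch is slow; negative: fast."""
    return elapsed - interval

print(stopwatch_offset(11.50, 10))  # 1.5   (watch 1.50 s slow)
print(stopwatch_offset(8.25, 10))   # -1.75 (watch 1.75 s fast)
```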

*Accuracy of the Method.* With this method, you should make several estimates using the same procedure each time, and these can be made in rapid succession, taking overall no more than 2 to 3 minutes (for maybe 10 estimates). You would then average these estimates. This averaged result should be within about .10 seconds of exact correctness, but to some extent, of course, this level of accuracy will depend on the number of single estimates averaged. As Oldtimer2 summarized this method: "…any reaction time delay errors in starting/stopping the stopwatch tend to cancel.

Plus timebase errors in the stopwatch are completely negligible over timescales of a few tens of seconds, [true for] even the most awful quartz movement ever made! I find that with practice five readings seems to easily give 0.10-second accuracy or better (and if you doubt this, you can always test the method by starting and stopping the stopwatch against the reference time source alone)."

The note above about time interval between separate timings applies with this method as well. To get a good estimate of the _seconds per year_ accuracy of your watch, you would set it exactly to the reference-clock reading first, and then, some time later, use the above method to determine offset over that time interval. As before, longer intervals will lead to more precise _spy_ values. 

_*Method 3: The Video Method*_

This method is the most accurate of the three, albeit somewhat more equipment-intensive as well. It appears that the notion of capturing on video the closeness of a watch second hand to the readout of a reference clock has been considered and described, over the years, by several of the more technically expert members of the WUS HEQ Forum. The specific program described herein, however, is due to forum member *Catalin*, and this software brings the method into the realm of possibility for many watch enthusiasts. What follows is Catalin's own description of this method.

*Needed.*

- A fast (>1 GHz) PC connected to the Internet on a decent connection and synchronized to one of the major Internet time servers *just before the tests* (this probably keeps the error in the computer's time under 10 milliseconds (ms). I use a freeware program called _AboutTime_ to sync my computer, since the program also tells you the error after each sync, and having two consecutive syncs with very small errors is a good guarantee that your computer is set very, very close to the actual atomic time);
- A decent 'watch program' on the computer that will display the time, including some fractions of a second, in a very careful way (so as to always have very constant and small delays). Unfortunately, none of the major operating systems around is even soft-realtime, but I have modified one of my own programs (see below) on Windows, and I believe the errors are in the same 10-20 ms interval and, more importantly, very constant (so they are completely eliminated for practical purposes when you calculate the difference between two such similar measurements);
- A decent (LCD) display with a 60 Hz refresh rate or better; the newer 120 Hz displays (some of which are also '3D ready') are even better;
- A decent camera that can do movies at 25/30 (maybe even 50/60, or 100-200 if you have a 120 Hz monitor) frames/sec; ideally it should also do those movies in 'macro mode';
- A program that can display the movie from the above camera 'frame by frame' (VLC, even BSPlayer); and, of course,
- Your watch.

The program I use can be now found at:

http://caranfil.org/timing/setup_ear...0_122_beta.exe.

You will note that the 'main window' stays normally hidden and can be shown with either a click on it or just 'hovering' with the mouse over it (there is a setting to configure that). Normally the milliseconds are not shown, but if on activation either SHIFT or CTRL is pressed the 'millisecond mode' is activated. If both SHIFT and CTRL are pressed the 'seconds beep' mode is also activated. See below a post with pictures from the mini-movies I am using for timing tests.

*Procedure.* Once you take a few (2-3) short (10-15 seconds) movies, you go with them to the video player and 'hunt' for the *milliseconds* interval in which the seconds-hand is advancing. Most often around that moment you will see one frame with one time displayed on the monitor and the seconds-hand in the first position, then a second frame with a later time displayed and the seconds-hand already in the final new position. In that scenario, the actual time is the average of the time in the first frame and the time in the second, and by looking at a number of such frames you can narrow the interval so as to have statistically better precision. Even better, IMHO, would be to 'catch' a frame where the seconds-hand is actually 'in motion'; in those cases I take as my measurement the *milliseconds* time of that frame as the actual time.

At the end of this description, there is a sequence of images like that described above. Note in these images that the *milliseconds* are at *.743*, and by actually looking at the full time and the time on the watch we can calculate that the watch is at about *+24.257* seconds from the Internet atomic time (ahead).

That time is written down (I placed it initially into a TXT file and now into an Excel file), and some time later (I suggest 1 or 2 weeks for regular longer-term measurements), you will do the next similar measurement; in my case it was *+26.744* after two weeks, and from that, the difference was *+2.487* over two weeks, which means a rate of about *+64.840* seconds over one year (our _seconds per year, or spy_, value discussed earlier). All those numbers are in one of my more recent PDF files, for instance:

http://caranfil.org/timing/timing_data_20100314.pdf
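The two-week arithmetic above (26.744 − 24.257 = 2.487, prorated to a year) can be sketched in a few lines of Python (the function name is ours):

```python
def drift_rate_spy(offset1, offset2, days_between):
    """Prorate the change between two offset readings to seconds per year."""
    return (offset2 - offset1) * 365 / days_between

# Catalin's figures: +24.257 s, then +26.744 s two weeks later.
print(round(drift_rate_spy(24.257, 26.744, 14), 2))  # 64.84
```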

*Accuracy of the Method.* With the above equipment and procedure, I believe you can easily measure with a precision clearly better than 100 milliseconds (even down to the actual time for each frame on the monitor), though probably nothing much better than 5-10 ms. But even at 50-100 ms, the results will be more than 10 times better than what we normally get with a quick single measurement by 'the human eye' (Method 1 above).


Pictures follow.


----------



## South Pender

*SOME MORE DETAILED DIRECTIONS ON USING THE*
*STOPWATCH METHOD FOR WATCH-RATE TIMING AND BEYOND*
7 May 2010

After considerable experimentation since the first posting of this sticky thread, I have developed some procedures that may improve the precision with which the Stopwatch Method may be employed.

*GENERAL REVIEW OF THE METHOD*

Just to briefly review this method, we access a reliable reference clock on our computer screen. I've used the NIST reference clock, but there do appear to be more precise alternatives. Still, the precision offered by the NIST site seems sufficient for general timing purposes. I wait until the site indicates "Accurate within 0.1 seconds," although not a great deal of imprecision attends accuracy postings of 0.2 seconds. I also do all my timing from the same computer, as differences can occur between computers. The NIST reference clock may vary in its accuracy through the day and/or between days (from a minimum of 0.1 seconds to as high as close to 1.0 second), but on my main computer for timing, it is most commonly operating at the 0.1-seconds accuracy level.

Next, we wait until the reference clock shows a convenient number with regard to the seconds, like XX:XX:X0 or XX:XX:X5 (for example, 10:13:00, 10:13:20, 10:13:05, 10:13:25, etc.), and as quickly as possible click *on* the stopwatch. Some number of seconds later, while looking at the second hand of our watch, we click *off* the stopwatch. Thus, for example, we might, while looking at the NIST reference clock, click the stopwatch on at, say, 2:24:00, and, now watching the second hand of the watch, click the stopwatch off at the moment the second hand hits 2:24:10. The stopwatch will now be stopped with an elapsed time showing on it. If that time is less than 10 seconds, your watch is running fast; if the time is greater than 10 seconds, the watch is running slow. Say that the time showing on the stopwatch is 9.87 sec. This means that the watch is running .13 seconds fast, or +.13 seconds. (Let's agree to label this discrepancy between the reference-clock time and that on your watch as the watch's *offset*, meaning the amount it is "off" the reference clock.) You can now relate this offset from perfect time to the time since the watch was synchronized with the NIST clock and determine the rate of gain (or loss) over that period. Let's agree to call this discrepancy (gain or loss) between the present offset on your watch and the offset reading (if there was any) at the time you synchronized the watch with the reference clock the watch's *drift*, referenced to a time period (for example, .10 seconds _drift_ over a 4-week period).

*SOME ADDITIONAL DETAILS*

*1. Timing Interval*

In my previous coverage of this method, I suggested using a 10-second timing interval. That is, if you were to start the stopwatch when the reference clock showed, say, 12:45:*20*, you would stop it when your watch's second hand hit 12:45:*30*. I have now found that this timing interval, although fine if you wish to use it, is longer than strictly necessary. *I now use a 5-second timing interval*. For example, I start the stopwatch the instant the reference clock flashes, say, 3:42:*10*, and stop it the instant my watch's second hand hits 3:42:*15*. This is what I mean by _Timing Interval_. There is absolutely no loss in precision in going to a shorter timing interval, and I find that I can comfortably do 10 timings (from which I compute the average) in less than 2 ½ minutes with the 5-second Timing Interval. 

*2. Stopwatch Technique*

It is important, as noted previously, not to anticipate a time point, but instead to wait until you have _actually seen it_. This applies, of course, to both the time point on the reference clock and the second marker that your watch's second hand hits at the end of the timing interval. Once you have seen the time display or second hand hit the second marker 5 (or 10) seconds later than the NIST time display, it is important to _instantly_ snap the stopwatch. Too-fast or delayed clicking of the stopwatch will contribute to slight levels of inaccuracy. After one or two timing sessions, this becomes pretty easy. I have found that the variability among the timing results can be reduced quite substantially by attention to consistent, unvarying technique.

*3. Recording Results*

This is straightforward. I just quickly write down the elapsed time on the stopwatch, reset the stopwatch to 00:00, and wait for the next X0- or X5-second mark (for example 12:30:10, or 12:30:15, etc.). I do 10 (or 12; see Section 9 below) timings (or _timing trials_), and, as noted, can do this number of timings and write down the results in less than 2 ½ minutes overall.

*4. Establishing a Baseline*

Timing tests are done to determine the drift occurring over a known period of time. This means that we have to have an accurate reading of the time offset (if any) at the beginning of that period. To do this:

_(a) First synchronize your watch with the reference clock_ (e.g., the clock on the NIST website). This is straightforward. By whatever means the watch provides, start the watch the instant the reference clock hits some convenient hour:minute:00 mark, such as 10:15:00. (You have already set the hour and minute hands to 10:15.) Within a certain band of error, your watch will now be in exact agreement with the reference clock.

_(b) Perform a Synchronization Check._ To gain greater precision, it is a good idea to then do a timing test, by the method discussed above. For example, despite your best efforts to achieve absolute agreement with the reference clock, you may find that your watch is, in fact, running .05 seconds faster (or slower) than the reference clock, i.e., has a +.05 sec. or -.05 sec. offset. This is your *baseline* for future rate checks. Make a note of this for future reference. Let's say that, at the synchronization session, you end up (despite your best efforts) with a +.05 sec. offset, and you do your follow-up timing test in one month's time. If at that time, you get a result that indicates that your watch is .15 seconds faster than the reference clock, its actual _drift_ (gain in this case) over that one-month time period is not .15 seconds, but rather .10 seconds (+.15 - +.05).

*5. Processing Your Results*

We have discussed elsewhere that individual observations of rate drift are prone to error (_sampling error_ in the language of the statisticians). However, this error can be greatly reduced by using _averages_ of a number of observations. I have alluded above to the use of 10 timings (on which to base the _average_ of the 10). This average will be far closer to the actual time offset than will any _single_ observation no matter how carefully and precisely it is obtained (even by a method like the Video Method).

There are two key indices you need to calculate from your 10 (say) timing trials: (a) the average (or *mean*) and (b) the *standard deviation* (I'll abbreviate this as _*St. Dev.*_). Both are easily calculated via any simple engineering, scientific, business, or statistical hand-held calculator ($20 range). You just enter the values you obtained in the timing trials and then press the button for each of these indices. Let's consider an example (some numbers I just ran for my own watch). I ran 10 timing trials and got the following results, using a 5-second timing interval:

4.91; 4.88; 4.92; 4.89; 4.96; 4.85; 4.95; 4.91; 4.86; 4.90.

The _mean_ is *4.903*; the _St. Dev._ is *.0353*. This means that the *mean offset* is .097 seconds (5.00 - 4.903) and that my best estimate of how far my watch is "off" the exact atomic time is .097; that's what we mean by mean offset. Since my baseline (luckily) was 0.00 seconds, this means that, over the 41 days in the time period since synchronization, the watch has gained .097 seconds. Thus, we understand the *drift* over that 41-day period as +.097 seconds. If, on the other hand, my baseline had been, say, +.025 seconds (because of not starting my watch at exactly the right time), the _drift value_ would have been +.097 - +.025 = +.072 seconds.
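These two indices, and the mean offset, can be reproduced with Python's standard `statistics` module (its `stdev` is the sample standard deviation, the same quantity most hand-held calculators label St. Dev.):

```python
import statistics

trials = [4.91, 4.88, 4.92, 4.89, 4.96, 4.85, 4.95, 4.91, 4.86, 4.90]

mean = statistics.mean(trials)
st_dev = statistics.stdev(trials)   # sample (n-1) standard deviation
mean_offset = 5.00 - mean           # 5-second Timing Interval

print(round(mean, 3), round(st_dev, 4), round(mean_offset, 3))
# 4.903 0.0353 0.097
```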

As noted, the variability of the time values (indexed by the _St. Dev._) depends on the consistency with which the stopwatch readings are made. I have by now found almost all of my _St. Dev._ values to lie between about .025 and .060. The _St. Dev._ for the 10 time measurements above was .0353, quite typical for me. However, this consistency will depend to some extent on the perceptual-motor precision exercised in the timing trials. This is not really all that crucial (the mean is of greatest importance), but reducing the _St. Dev._ for a set of timing observations as much as possible is all to the good. We'll return to this below.

*6. Prorating to Seconds per Year (SPY)*

Our most common metric for understanding watch accuracy (at least on the HEQ forum) seems to be _seconds per year_, or _SPY_. So, if we are seeing AA seconds drift over a period of BB days, our _SPY_ value will be (365/BB) × AA. In our above example, the +.097 drift value was over a period of 41 days, so that my best estimate of the watch's accuracy in seconds per year is (365/41) × +.097 = *+.8635* _SPY_.

*7. Time-Period Issues for the Estimation of Drift*

It is necessary to distinguish between (a) what we have referred to above as the *Timing Interval*, which is approximately the number of seconds between the starting and stopping of the stopwatch (it will be exactly that number if the watch is in perfect synchronization with the reference clock), and (b) the elapsed time between the synchronization session and the testing (or follow-up) session when _drift_ is being assessed. Let's refer to this between-session elapsed time as the *Time Period* employed in the assessment of drift.

Although the precision of each timing session can be easily evaluated by the methods described above, drift estimates really depend on a _Time Period_ of weeks or, preferably, months for our drift estimates prorated to one year to be free of excessive error. Jumping ahead just a little, I can calculate the error-interval magnitude associated with the seconds-per-year estimate above of +.8635 _SPY_. Based on a Time Period of 41 days (see above), I can state with .95 probability of being correct that the watch's _drift_ can be captured by an interval running from *+.5381 sec. to +1.1889 sec. *

However, consider what this error interval would look like if my prorated drift value of +.8635 _SPY_ had been based on a Time Period of _2 days_, instead of 41 days. Our interval would run from *-5.8070 sec. to +7.5340 sec.* That degree of uncertainty (± 6.6705 seconds, as opposed to the ±.3254 seconds for the 41-day Time Period) makes the 2-day-based _SPY_ drift estimate of virtually no value. Had our Time Period been, instead of 41 days, 3 months, on the other hand, our _SPY_ drift estimate would be within ±.1462 seconds error.
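These interval widths all come from the same proration: the ± half-width shrinks in direct proportion to the Time Period. A sketch using the 342.954 factor for 10 timings per session (introduced in Section 8 below) and the example's Avg. St. Dev. of .0389:

```python
def spyd_half_width(avg_st_dev, days):
    """95% half-width (the ± value) of the prorated SPY drift estimate,
    assuming 10 timings per session (the 342.954 factor from Section 8)."""
    return 342.954 * avg_st_dev / days

st_dev = 0.0389                # Avg. St. Dev. from the worked example
for days in (2, 41, 91.25):    # 91.25 days is roughly 3 months
    print(days, round(spyd_half_width(st_dev, days), 4))
# 2 -> 6.6705, 41 -> 0.3254, 91.25 -> 0.1462
```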

The degree of error in the _SPY_ drift estimates does, of course, fall on a continuum paralleling a Time Period continuum ranging from the ridiculous (like several days) to the extremely desirable (like several months). In my opinion, any _SPY_ drift estimate based on a Time Period of less than 3 weeks to 1 month simply reflects wasted time and effort. 
---------------------------------------------------------------------------------------------------------------------
_For many horological experimenters, the foregoing is likely sufficient to obtain really informative and usable results. However, for those who wish to delve more deeply into this topic, more procedures follow._
_---------------------------------------------------------------------------------------------------------------------_

*8. Setting Confidence Intervals around Our Timing Estimates*

This is a little (only a little) more complicated and is not strictly necessary if you are satisfied with a single estimated _SPY_ value. We do, however, have the wherewithal to set limits around our estimate to capture the possible error in this estimate, as alluded to above.

*(a) For a Mean Measurement Arising from a Single Timing Session*

_(i) For 10 Timing Trials (that is, 10 measurements at a session):_

Take your _St. Dev._ and multiply it by *.7153*. Call this product _C_. Then the value
*Mean Offset ± C* is a 95% confidence interval for the exact offset. This means that this interval has probability .95 of capturing the exact offset from atomic time. Let's take an example. In the above illustration of 10 timings, we obtained a mean offset of +.097 seconds. Since our _St. Dev._ was .0353, we obtain a value of *.0253* (.0353 × .7153) for _C_. Therefore, our 95% confidence interval is .097 sec. ± .0253 sec. or (.0717 sec., .1223 sec.). Interpretively, we can say that the probability is .95 that the _true offset_ is between .0717 seconds and .1223 seconds.
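The .7153 multiplier is consistent with standard small-sample confidence-interval theory: it appears to be the 97.5th-percentile t value for 9 degrees of freedom (about 2.2622) divided by √10. Treating that as an assumption, the interval above can be reproduced as:

```python
import math

# Assumption (not stated in the text): .7153 = t(df=9, 97.5%) / sqrt(10),
# i.e., 2.2622 / 3.1623, the usual small-sample CI multiplier.
t_crit = 2.2622                         # t critical value, 9 degrees of freedom
C = (t_crit / math.sqrt(10)) * 0.0353   # St. Dev. from the example

mean_offset = 0.097
print(round(C, 4))                                           # 0.0253
print(round(mean_offset - C, 4), round(mean_offset + C, 4))  # 0.0717 0.1223
```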

_(ii) For 20 Timing Trials (that is, 20 measurements at a session):_

Take your _St. Dev_. and multiply it by *.4680*. This is your _C_ value for 20 timings. Again, our 95% confidence interval will be *Mean Offset ± C*. Again using our earlier example, but now pretending that it was based on 20 timings instead of 10, we have _C_ = *.0165*, and our 95% confidence interval is .097 sec. ± .0165 sec. or (.0805 sec., .1135 sec.). This narrower bandwidth for our 95% confidence interval reflects the greater precision we have with 20, as opposed to 10, timings at the session.

*(b) For an Estimate of Drift Arising from Two Timing Sessions*

Here we have to realize that there will be random error associated with mean values obtained at _both_ the initial synchronization session and the second session some weeks or months later used to assess _drift_ over a known time period. This means that we need _St. Dev._ values from both sessions. A simplifying procedure is to use *the average of the two St. Dev. values* in what follows, or *Avg. St. Dev*. If you've forgotten the _St. Dev._ value from the synchronization session, just use the _St. Dev._ value from the second session in what follows. 

_(i) For 10 Timing Trials (10 timings in each session):_

Take your *average St. Dev.* value (as noted above and preferable) or single (Session 2) _St. Dev._ value (if you've forgotten the _St. Dev._ value from the synchronization session), and multiply it by *.9396*. Call this _D_ (for *d*rift). Our best estimate of drift is our offset measurement at Time 2 minus our offset estimate at Time 1 (the synchronization session), or, *Mean Offset T2 - Mean Offset T1*. Call this value our _Drift Estimate_. This value will increase, of course, as the time lag between T1 and T2 increases, so that we have to understand it as _drift over XX days_. Since the _XX_ in _XX_ days is arbitrary, let's agree to express drift in terms of _seconds per year (or SPY)_. To obtain this, we simply multiply our _Drift Estimate_ by (_365/XX_). For simplicity, let's call this the *SPY Drift Estimate*. The corresponding (prorating) factor for the confidence interval, which we will label *SPYD*, follows:

*SPYD = (342.954 × Avg. St. Dev.)/#days of time lag* _or_

*SPYD = (342.954 × Time 2 St. Dev.)/#days of time lag*.

Thus, our 95% confidence interval for the true, exact _yearly_ drift of our watch is:

*SPY Drift Estimate ± SPYD*.

*Example.* In our earlier example, we saw that we prorated our .097 drift measurement, based on a 41-day time lag between Session 1 (synchronization) and Session 2, to *+.8635* _SPY_ [.097 × (365/41)]. Thus, our value for *SPY Drift Estimate* is +.8635 _SPY_. Further, we saw that our Time 2 _St. Dev._ value was .0353. At the synchronization session, we obtained a _St. Dev._ (over the 10 timings at that session) of .0425. Thus, our _Avg. St. Dev._ value is .0389. With a 41-day time lag between Time 1 and Time 2, our value for *SPYD* in this case is *.3254 sec.* [(.0389 × 342.954)/41]. Finally, our 95% confidence interval for the watch's drift in seconds per year is *(+.5381 sec., +1.1889 sec.)*, that is, +.8635 seconds ± .3254 seconds. We would interpret this result as indicating that there is a .95 probability that the interval between +.5381 seconds and +1.1889 seconds contains the true _SPY_ drift value for this watch.
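The full two-session calculation can be wrapped in one function (a sketch; the names are ours, and the 342.954 and 233.663 factors are the ones given in the text):

```python
def spy_ci(offset1, st_dev1, offset2, st_dev2, days, factor=342.954):
    """95% CI for yearly (SPY) drift from two timing sessions.

    factor: 342.954 for 10 timings per session, 233.663 for 20.
    Returns (SPY drift estimate, lower bound, upper bound)."""
    drift_spy = (offset2 - offset1) * 365 / days
    spyd = factor * (st_dev1 + st_dev2) / 2 / days   # uses the Avg. St. Dev.
    return drift_spy, drift_spy - spyd, drift_spy + spyd

# The worked example: baseline offset 0.00 (St. Dev. .0425),
# second-session offset +.097 (St. Dev. .0353), 41 days apart.
est, low, high = spy_ci(0.0, 0.0425, 0.097, 0.0353, 41)
print(round(est, 4), round(low, 4), round(high, 4))  # 0.8635 0.5381 1.1889
```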

_(ii) For 20 Timing Trials (20 timings in each session)_: 

All that changes for this situation is the value of *SPYD*, which now becomes:

*SPYD = (233.663 × Avg. St. Dev.)/#days of time lag* _or_

*SPYD = (233.663 × Time 2 St. Dev.)/#days of time lag*.

Thus, again, our 95% confidence interval for the true, exact _yearly_ drift of our watch is:

*SPY Drift Estimate ± SPYD*.

*Example.* Suppose that, in our earlier example, the _SPY Drift Estimate_ had been based on 20 timings per timing session, instead of 10. So, our _SPY_ value is still +.8635, but our new _SPYD_ value would be *.2217* seconds [(.0389 × 233.663)/41]. (Earlier, with 10 timings per session, this value had been .3254, illustrating again the accuracy-enhancing effects of taking more sample timings.) So, in this case our 95% confidence interval is +.8635 seconds ± .2217 seconds or *(+.6418 sec., +1.0852 sec.)*. We would therefore conclude, with .95 probability of being right, that this interval contains the true _SPY_ drift value for this watch.

*(c) Confidence Coefficients*

We have used the confidence coefficient of .95 in all cases above, and it is undoubtedly true that this is the value employed in the construction of most confidence intervals in all fields that use inferential statistics. Nonetheless, more or less stringent confidence coefficients can be used, the most common alternatives to .95 being .99 and .90. Anyone wishing to probe this further can PM me, and I'll work out the corresponding _SPYD_ values. 

*9. A Final Bit of Statistical Esoterica*

Well, it's really not that esoteric. One way to robustify our estimates of means (making them more robust to distributional anomalies) is to use what statisticians have known about for decades-_Trimmed Means_. This is little more than common sense, and would be obvious to anyone. When you have a set of say 10 observations like offset values, you can calculate their _mean_ as above, or you can _trim away_ the values at each end of the set by omitting them from the calculation of the mean. Such a mean calculated on a trimmed distribution like this is referred to as a _Trimmed Mean_. It should be obvious immediately that such a mean value will very likely be more accurate than an untrimmed mean value, since any anomalous observations, or _outliers_, at each end of the distribution have been eliminated from the calculations. In the use of the Stopwatch Method, for example, it is possible to click the stopwatch on just a little too early on a single timing, and doing this will lead to too large a value for the time offset. Similarly, clicking on the stopwatch just a fraction of a second too late will lead to too small a value for the time offset. And, of course, this can work at both ends of the timing process-clicking the stopwatch on in response to the reference clock and clicking it off in response to your watch's second hand.

We often use a 10% trimming rule with distributions that might contain outliers. With 10 observations, this would mean dropping the largest offset value and the smallest (10% at each end of the distribution). When I do timing experiments, I sometimes take 12 observations and drop the largest and the smallest from the set before calculating the mean. This represents an 8.33% trimming and leaves me with 10 usable observations. Here's an example. Consider the following timing values obtained with a 5-second Timing Interval:

4.91; 4.82; 4.95; 4.87; 5.08; 4.83; 4.97; 4.89; 4.84; 4.95; 4.74; 4.90.

First, the untrimmed mean and standard deviation: mean = 4.8958333…, so that the Mean Offset is .1042, and the standard deviation = .0872. It seems likely that Observations 5 and 11 may well be outliers arising from choppy stopwatch technique-(a) starting too early and/or stopping too late (5.08) or (b) starting too late and/or stopping too early (4.74).

If we now trim the distribution to the middle 10 observations, our new (trimmed) mean is 4.893 (so that the Mean Offset is .1070), and the standard deviation is .0531. A general principle used by statistical researchers is to consider an observation an outlier if it is one standard deviation or more displaced from the next observation in the distribution. In the above set of timings, the 5.08 is well over one standard deviation (.0872) above the next highest recorded value (4.97), and the 4.74 is almost one full standard deviation below the next lowest observation (4.82). In this case, then, trimming the distribution would be a good idea, with the trimmed mean likely to be a better estimate of the true offset value than the untrimmed mean. I should note that we _never_ trim from only one end of a distribution or use a different trimming percentage at each end.
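For anyone wanting to check these numbers, here is a small sketch in Python (a spreadsheet's AVERAGE()/STDEV() would do equally well) using the twelve timings from the example:

```python
import statistics

# The twelve timings from the example above (5-second Timing Interval)
timings = [4.91, 4.82, 4.95, 4.87, 5.08, 4.83, 4.97, 4.89,
           4.84, 4.95, 4.74, 4.90]

def trimmed_mean(values, trim=1):
    """Drop the `trim` smallest and `trim` largest values, then average the rest."""
    kept = sorted(values)[trim:len(values) - trim]
    return statistics.mean(kept)

untrimmed = statistics.mean(timings)               # ~4.8958 -> Mean Offset ~.1042
trimmed = trimmed_mean(timings)                    # ~4.8930 -> Mean Offset ~.1070
sd_full = statistics.stdev(timings)                # ~.0872
sd_trim = statistics.stdev(sorted(timings)[1:-1])  # ~.0531
```

The trimmed mean drops the 5.08 and the 4.74 exactly as described, and the standard deviation falls from about .0872 to about .0531.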

Trimming does not introduce bias since we trim equally at each end of the distribution. What it does do is stabilize the mean estimate, freeing it up from distortions caused by outliers. The other consequence of trimming is, of course, a substantially-reduced standard deviation.

The sampling distribution of trimmed means, unfortunately, is not identical to that of untrimmed means, with the consequence being that the confidence-interval procedures described above cannot be used exactly as they are with untrimmed means. If, however, you use the untrimmed standard deviation in the calculation of the confidence intervals, you will come very close to exact confidence intervals for trimmed means.

*10. Summary*

There's quite a bit of material above, and I've included this level of detail (particularly Points 8 and 9) for the benefit of those with a little statistical training or those wanting to delve deeply into the topic. However, it is certainly not necessary to go beyond Point 7 in the above in order to obtain very useful and accurate timing information using the Stopwatch Method. All that is necessary is to: (a) try to develop a consistent stopwatch process (which is really dead easy) so that your standard deviations are not too high, (b) use a sufficient number of observations to produce reasonably stable Mean Offset estimates (10 observations being completely satisfactory), and (c) use a reasonable Time Period so that your prorated _SPY_ estimates have reasonably small error bounds (as noted, a minimum of 3-4 weeks is recommended). If you do all these things, you will have extremely precise values from which to judge the accuracy of your watch.

One last point: Although some of the above may seem to some readers like advanced statistics, it really is anything but that. Almost all of the principles described have been known for 100 years and are routinely taught to undergraduates (and high-school students) encountering statistics for the first time.

I am fully prepared to answer questions on this piece and will revise it if necessary-with new information, and needed clarifications and corrections-in the future. Anyone who would like to comment on this material or suggest revisions or additions is more than welcome to PM me.


----------



## Catalin

*Some extra quick-info on chrono-method vs video-method*

1. Ideally you should try both and only after actually trying in parallel get to conclusions ;-)

2. Keep in mind that BOTH methods rely on internet time (or a good GPS time) - for internet time there are very simple programs like AboutTime (I sync until 2 consecutive errors are both under 10 ms) and there are more advanced (and more accurate when used properly) programs, like the ntp package - for Linux you probably know where to find out more and for Windows you can use the one from http://www.meinberg.de/download/ntp/windows/[email protected] for instance with a command-line like:

*"c:\Program Files\NTP\bin\ntpdate.exe" -b -p 8 -t 8 time-a.nist.gov wwv.nist.gov time-a.timefreq.bldrdoc.gov time-nw.nist.gov nist1.symmetricom.com*
For the chrono method you can also use the Java/browser applet started from http://www.time.gov/ - if the network conditions are *very similar* and you do 20 readings each time at a two-week interval the resulting accuracy can be much better than the one displayed by the actual applet (which is a kind of worst-case scenario and very often for me is 400 milliseconds).

3. If for various reasons you can only try the chrono method you MUST first do AT LEAST ONE 'self-test' - meaning a set of at least 10 readings (I strongly recommend 20 readings) in which you try to measure (with the chrono) precisely 10 seconds on the test-watch - place the resulting 10-20 numbers in an Excel / OpenOffice Calc file and then calculate for them the AVERAGE() and the STDEV() which should tell you how accurate you can get IN IDEAL CONDITIONS. You will see that most likely the average will *not* be 10.000 seconds and the standard deviation will only get under 100 milliseconds after some training !!!
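To sketch that spreadsheet self-test in code (Python standing in for Calc/Excel; the readings below are invented for illustration, not real chrono data):

```python
import statistics

# Hypothetical self-test: 12 attempts to stop the chrono at exactly
# 10 seconds (substitute your own recorded values here).
readings = [10.08, 9.94, 10.12, 10.03, 9.91, 10.05,
            9.97, 10.10, 10.02, 9.96, 10.07, 9.99]

avg = statistics.mean(readings)   # AVERAGE() -> ~10.02 s, i.e. a ~20 ms bias
sd = statistics.stdev(readings)   # STDEV()   -> ~67 ms of timing scatter
```

The average tells you about your systematic (reaction-time) bias, and the standard deviation about your click-to-click consistency - the quantity Catalin says should get under 100 ms with training.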


----------



## dwjquest

This article at NIST discusses aspects of stopwatch and timer calibrations. Includes information on reaction times, etc. Should be informative for anyone who desires to determine accurately the rate of a quartz watch.

http://tf.nist.gov/general/pdf/2281.pdf


----------



## qiongyi

there are so many useful tips on how to determine the accuracy of a watch, so helpful! wow, i learn a lot here! thank you!:thanks


----------



## Catalin

*Another thing ...*

This one should NOT make a huge difference if you follow some of the other advice carefully but might explain some of the numbers that you will see in your videos and, more importantly, will certainly explain why you should do things that way - with an internet time-sync just before starting your videos!

Certain computers have a much larger 'internal time drift' than others, and some are really bad - so it is not impossible to see the COMPUTER time drifting by more than 1 millisecond in about one minute - the more stable computers tend to generate the same millisecond numbers in corresponding frames - so a good watch in two videos 5 minutes apart will have the seconds-hand moving between let's say .221 and .252 in both videos, while in the presence of computer drift in the second video it might move between .226 and .257 !!! The difference is more obvious if you do video tests one week on one computer and the next week on another, and you see that the videos in one instance were very constant in numbers while in the second instance there was a small drift ...

The above will only become a problem if you have one such very bad computer *and* you do a *lot* of timing videos in sequence - in that case you just need to repeat the internet time-sync every 10-15 minutes or so!


----------



## South Pender

*Establishing a Precise Baseline*

*The Importance of Establishing a Precise Baseline for Timing Measurements*

As I was setting up a couple of watches for timing trials on the weekend, it occurred to me just how important it is to establish a precise baseline value for all future timing measurements. When assessing the accuracy of a watch movement, unless we are using sophisticated methods that capture the oscillation rate of the quartz crystal (procedures that are well beyond most watch owners in terms of both the necessary equipment and the expertise), we generally do this by comparing the extent to which the watch is "off" (that is, deviant from some precise time source) at a particular time point. However, in order to know what this means, we need to know how much "off" the watch was at some previous time period. The difference is termed the "drift" of the movement over that time period, which can be transformed into the _seconds-per-year_ (or _spy_) scale. 

Let's look at an example. You wish to determine the accuracy of your new Yamamoto Super Duper Quartz watch. You proceed to set it as precisely as possible to the atomic clock at Time 0. One month later, you calculate the offset from perfect time and then relate that to the time period of one month in order to get an estimate in _spy_. Let's say you get the following values: Time 0, set to the atomic clock (so 0 offset _assumed_). At Time 1, you find that your watch is .25 seconds ahead of the atomic clock-by one of the methods suggested in the earlier posts in this thread, the Stopwatch Method or the Video Method. You thus conclude that your watch has gained exactly .25 seconds in one month (the watch's drift), and, therefore, is running at the rate of +*3.0 spy* (+.25 × 12 months).

The problem with this procedure is that your starting value-what we term the *Baseline Value*-may not be truly 0 as you had assumed. When you set a watch to be as close as possible to the exact time, you stop the second hand and then release it (by pushing in the crown) as closely as possible to a time signal of XX:XX:00 (such as 10:45:00). However, it would be very unlikely that you actually started the watch at exactly 0 seconds, simply because of human perceptual/motor error. You would be pretty close, but not precisely on the mark. So, using our earlier example, suppose that, unknown to you, you actually push in the crown a little late-say .15 seconds late (entirely possible). This means that your starting value is not 0, as you thought, but rather -.15 seconds. Now, one month later, when you calculate the drift and get the +.25 seconds from your calculation, your actual drift value is not +.25 seconds over that month, but rather *+.40* seconds: .25 - (-.15), and your best estimate of drift in seconds per year is *+4.8*, rather than *+3.0*.

Or, of course, it could work the other way. Let's say that, instead of your being slow in starting the watch with reference to the atomic clock, you push in the crown too soon, say, .15 seconds too soon, so that your actual starting offset is +.15 seconds. When you then go to determine drift after one month and get the +.25 value noted above (and the +3.0 _spy_ year estimate), this +.25 value should, in fact, be referenced to the true starting value of +.15. This would mean that the watch had gained only .10 seconds in the one-month timing period, and that your correct estimate of seconds per year would be *+1.20* _spy_ (instead of the +3.0 you had calculated).

So the solution is very simple. When you set your watch to as close as possible to atomic time at the start of the timing trials, follow this up right away with a timing test. That is, do at Time 0, what you will do later at Time 0 + XX, and estimate the _starting deviation_ from exact atomic time; that will be your _baseline value_ for all future assessments of drift. If we go back to the previous examples, if we had done this in, say, the second example, our baseline would have been +.15 seconds. Then all future timing results would be referenced to that value, rather than to 0, and the resulting drift estimates would be relatively precise.
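The arithmetic of the examples above can be condensed into a short sketch (Python; the function name is mine, not standard notation):

```python
def drift_spy(baseline_offset_s, later_offset_s, months):
    """Prorate the drift between two offset readings to seconds per year (spy)."""
    drift = later_offset_s - baseline_offset_s   # the watch's true gain over the period
    return drift * 12 / months

naive = drift_spy(0.0, +0.25, 1)    # assumed 0 baseline -> +3.0 spy
late = drift_spy(-0.15, +0.25, 1)   # crown pushed in .15 s late -> ~+4.8 spy
early = drift_spy(+0.15, +0.25, 1)  # crown pushed in .15 s early -> ~+1.2 spy
```

The same +.25-second reading a month later yields three quite different rate estimates depending on the true baseline, which is exactly the point of measuring it.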

In my own timing experimentation, I've been able to get closer to 0 in my starting deviation from precise atomic time than + or - .15 seconds, but, in fact, it really doesn't matter just what your starting deviation-or baseline value-is. This is because you are interested in _the change_ (drift) from that baseline value over time. It is the latter that indexes the accuracy of your watch.


----------



## Catalin

*Re: Establishing a Precise Baseline*



South Pender said:


> *The Importance of Establishing a Precise Baseline for Timing Measurements*
> ...


Yes, that is essential, and it is good that we have now emphasized it.

That also explains why on DST changes we need to do two sets of measurements for all watches in our tests which do not have 1-hour quick adjustments - the first measurement 'closes' the previous interval, then we need to adjust the DST, and then a new baseline has to be taken !!!


----------



## webvan

Eeeb said:


> *VIDEO METHOD - HIGH SPEED CAMERA* - decent camera that can do movies at 25/30 (maybe even 50/60 or 100-200 if you have a 120 Hz monitor) frames/sec.; ideally it should also do those movies in 'macro mode';


Artec has kindly sent me his Casio EX-F20 high speed video camera to see if it would help get quicker results with the video method. As a reminder, with a 25fps camera, the resolution is 0.04 seconds, and therefore the accuracy of the method is about 14 spy over 24 hours and about 2 spy over a week.
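The arithmetic behind those figures, sketched in Python (a single reading's resolution simply prorated over the timing period; note this counts only the camera resolution, not clock error):

```python
def prorated_spy(resolution_s, days):
    """Prorate a single reading's resolution (in seconds) to seconds per year."""
    return resolution_s * 365 / days

one_day = prorated_spy(0.04, 1)    # ~14.6 spy over 24 hours
one_week = prorated_spy(0.04, 7)   # ~2.1 spy over a week
```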

Tried it immediately but unfortunately the high speed mode requires a lot of light so I wasn't able to get a reading during my usual night-testing.

Tried again outdoors and while it's pretty fascinating to see the backlash on the seconds hand of my 8F35 at 210fps (480x360 resolution), it's really hard to read the time display on my laptop as it seems to move in increments of 0.02s, i.e. a refresh rate of 50fps. Unfortunately the setting below that is a 30-210fps. Can't fine-tune it, probably an auto fps mode, will check the manual to see if it can be forced to 100fps.


----------



## South Pender

webvan said:


> Artec has kindly sent me his Casio EX-F20 high speed video camera to see if it would help get quicker results with the video method. As a reminder, with a 25fps camera, the resolution is 0.04 seconds and therefore the accuracy of the method over 24 hours is of 14spy and over a week of 2spy.


Interesting. If resolution is .04 seconds, this means that you should generally be within .04 seconds of the exact time shown on the screen. Therefore, if you do a _single timing_ with the Video Method, your error bandwidth will arise from a combination of this "camera error" and the existing clock error, so that your true (full) error bandwidth might be something like ± .06 seconds. This follows directly from elementary theorems on variances of linear combinations. 

With the Stopwatch Method, the error bandwidth for a single timing arises from a combination of clock error (as is true for the Video Method) and perceptual error (which replaces camera error with the Video Method). In my trials, perceptual error has an error bandwidth of about .10 seconds (and thus is considerably larger than the camera error encountered with the Video Method), which, when combined with the same clock error as seen with the Video Method, leads to a true (full) error bandwidth of something like ± .11 seconds. Thus, the error associated with a single Stopwatch Method timing result is about twice as large as that with a single Video Method timing result.

However, consider what happens when the results from a number of timing trials are averaged. If we take 10 results with the Stopwatch Method, the error bandwidth associated with the _mean_ of the 10 results becomes about ± .035 - .040 seconds, this mean being more accurate than a single Video Method reading. Of course, the same proportional reduction would apply to an average using the Video Method too (this is all governed by very elementary statistical theory), so that, if, instead of taking a single Video Method result, we took 10 and averaged them, the error bandwidth would be reduced to about ± .02 seconds. 

In my timing trials, I've been using 40 trials with the Stopwatch Method. This has reduced my error bandwidths to about ± .02 seconds. Clearly, this is more than adequately precise for our timing experiments.
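The √_n_ arithmetic used in the last two paragraphs, as a quick sketch in Python:

```python
import math

def mean_error_bandwidth(single_error_s, n):
    """Error bandwidth of the mean of n timings, assuming independent random errors."""
    return single_error_s / math.sqrt(n)

stopwatch_10 = mean_error_bandwidth(0.11, 10)  # ~0.035 s
stopwatch_40 = mean_error_bandwidth(0.11, 40)  # ~0.017 s, i.e. about +/- .02 s
video_10 = mean_error_bandwidth(0.06, 10)      # ~0.019 s, i.e. about +/- .02 s
```

The single-timing bandwidths (0.11 s and 0.06 s) are the approximate values quoted in the post; the reduction by the square root of the number of trials is the only new step.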


----------



## South Pender

webvan said:


> Artec has kindly sent me his Casio EX-F20 high speed video camera to see if it would help get quicker results with the video method. As a reminder, with a 25fps camera, the resolution is 0.04 seconds and therefore the accuracy of the method over 24 hours is of 14spy and over a week of 2spy.


Not quite true. If the error associated with a single timing at the beginning of the 24-hour period is .04 seconds, as you have stated, it follows that there is also a similar error associated with the single timing at the end of the 24-hour period. The change (or drift) estimate relies on both measurements. Thus, the total error in the assessment of drift--by which you estimate accuracy--is about .06 seconds, or about *22 spy*. This is a perfect illustration of why attempting to estimate _spy_ using a one-day timing period will necessarily be far too imprecise.

However, your assertion of an error bandwidth of .04 seconds accounts only for camera error. What about clock error? This must be factored in, as noted in my just-preceding post. Once we do this, we get a single-measurement bandwidth of about ± .06 seconds, and this applies at each measurement point. Since two measurements are required to evaluate drift (that is to get a _spy_ value), the actual error bandwidth of the change, or drift, calculation will be about ± .08 - .09 seconds. Given this fact, any estimate of _spy_ using a single observation at each time point with the Video Method will produce a total error bandwidth of about ± 31 seconds for a one-day assessment of _spy_, and about ± 4.5 seconds for a _spy_ estimate based on a one-week time period--that is, taking the offset from the atomic clock at Time 1 and then again at Time 1 + 7 days and using the change between the two as an estimate of drift, prorated to an annualized _spy_ value.
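A sketch in Python of the error-combination logic above (root-sum-of-squares for independent error sources; the input bandwidths are the approximate values quoted in the posts):

```python
import math

def combine(*errors):
    """Root-sum-square of independent error components (in seconds)."""
    return math.sqrt(sum(e * e for e in errors))

camera_err = 0.04                        # single-reading camera resolution
clock_err = 0.04                         # assumed reference-clock error
single = combine(camera_err, clock_err)  # ~0.057 s, i.e. about +/- .06 s
drift_err = combine(single, single)      # two time points -> ~0.08 s
one_day_spy_err = drift_err * 365        # ~29 spy for a one-day period
one_week_spy_err = drift_err * 365 / 7   # ~4.2 spy for a one-week period
```

The point survives any reasonable choice of inputs: a one-day timing period is far too short for a useful _spy_ estimate from single readings.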


----------



## Catalin

webvan said:


> Artec has kindly sent me his Casio EX-F20 high speed video camera to see if it would help get quicker results with the video method. As a reminder, with a 25fps camera, the resolution is 0.04 seconds and therefore the accuracy of the method over 24 hours is of 14spy and over a week of 2spy.
> 
> Tried it immediately but unfortunately the high speed mode requires a lot of light so I wasn't able to get a reading during my usual night-testing.
> 
> Tried again outdoors and while it's pretty fascinating to see the backlash on the seconds hand of my 8F35 at 210fps (480x360 resolution), it's really hard to read the time display on my laptop as it seems to move in increments of 0.02s, i.e. a refresh rate of 50fps. Unfortunately the setting below that is a 30-210fps. Can't fine-tune it, probably an auto fps mode, will check the manual to see if it can be forced to 100fps.


Very interesting!!! However please also note my original reference to the refresh rate of the monitor - since I have an ordinary camera the way I am trying to use the video method is based more on the frame-rate of the monitor (than on the frame-rate of the camera), and even with my pretty old camera at 30 fps in 2-3 consecutive videos I can very often get enough cases where I can observe the 60 fps intervals from the monitor - meaning intervals of around 17 milliseconds.


----------



## webvan

Yes same here with my 25fps camera, except it's 20ms. I was mostly interested in seeing what a high speed camera could do to improve the video method for observations over a shorter period of time but it seems it's not much, if anything. The monitor being the limiting factor and also the requirement for light at these frame rates. Still, these cameras create impressive slow motion videos so it was time well spent ;-)


----------



## Catalin

webvan said:


> Yes same here with my 25fps camera, except it's 20ms. I was mostly interested in seeing what a high speed camera could do to improve the video method for observations over a shorter period of time but it seems it's not much, if anything. The monitor being the limiting factor and also the requirement for light at these frame rates. Still, these cameras create impressive slow motion videos so it was time well spent ;-)


Yes, those slow-motion videos can be indeed spectacular :-!

But returning to the video method - now I realize that for plenty of people things might not be as easy as in my example - using a 30fps camera and a 60fps monitor could generate a slightly different kind of 'frame hunt' than when using a 25fps camera and a 60fps monitor - my old camera can only do 15 or 30 fps so I can't easily test the 25fps results but that sounds like an interesting test some day (or eventually a test with 30fps camera and 50fps monitor rate) ;-)


----------



## South Pender

Catalin, why do you not just take several video readings (say 8-10), and use their average as your data point? The error component is reduced by a factor of √_n_, where _n_ is the number of readings you would take. In other words, if your error from a single observation is 20 ms., your error associated with the average of _n_ = 10 readings would be 6.32 ms. That way, you'd overcome most of the frame-speed limitations. Or perhaps you do average (I can't remember), in which case, how many readings do you take?


----------



## Catalin

South Pender said:


> Catalin, why do you not just take several video readings (say 8-10), and use their average reading as your data point. The error component is reduced by √_n_, where _n_ is the number of readings you would take. In other words, if your error from a single observation is 20 ms., your error associated with the average of _n_ = 10 readings would be 6.32 ms. That way, you'd overcome most of the frame-speed limitations. Or perhaps you do average (I can't remember), in which case, how many readings do you take?


I believe the assumption above is ONLY valid when you deal in 'continuous values' where the errors are pretty random and probably smaller than any of the known sources of non-random errors - with the video method (and decent 'hardware') the values are not continuous but instead have certain 'quantized' steps in multiples of about 16.66 ms for the monitor and 33.33 ms for the camera.

I am generally making 2-3 videos of around 10 seconds (which provides about 20-30 pairs of relevant frames) each, not in order to compute clear averages but instead in order to get to the higher degree of accuracy from the monitor steps (which is obviously twice as fine as the 'default' one from the camera).

It does not make enough sense at this point to try anything more than that - given the fact that the error from the initial Internet time-sync is still in the range of 10-20 ms (maybe closer to 5 ms. now that I use ntpdate with multiple servers) and that the error on the computer clock seems to be in the range of +1 ms/minute ...

Also from what I have seen many of the watches in my tests seem to 'move' the seconds hand in something around 10 ms but I have also seen a few clearly going from one position to the next in an interval higher than 33ms.


----------



## South Pender

Catalin said:


> I believe the assumption above is ONLY valid when you deal in 'continuous values' where the errors are pretty random and probably smaller than any of the known sources of non-random errors - with the video method (and decent 'hardware') the values are not continuous but instead have certain 'quantized' steps in multiples of about 16.66 ms for the monitor and 33.33 ms for the camera.
> 
> I am generally making 2-3 videos of around 10 seconds (which provides about 20-30 pairs of relevant frames) each but not in order to compute clear averages but instead in order to be able to get to the higher degree of accuracy from the monitor steps (which is obviously twice better than the 'default one' from the camera).
> 
> It does not make enough sense at this point to try anything more than that - given the fact that the error from the initial Internet time-sync is still in the range of 10-20 ms (maybe closer to 5 ms. now that I use ntpdate with multiple servers) and that the error on the computer clock seems to be in the range of +1 ms/minute ...
> 
> Also from what I have seen many of the watches in my tests seem to 'move' the seconds hand in something around 10 ms but I have also seen a few clearly going from one position to the next in an interval higher than 33ms.


Good points. However, your point about averages only making sense with continuously-distributed random variables is not quite correct. Most of the data that we analyze in any branch of research is in the form of a discrete (in steps) variable overlaying a continuous variable. The numbers are discrete steps because of the impossibility of capturing the infinite number of observations associated with the underlying continuous random variable. This fact, however, doesn't stop us from computing means, standard deviations, and many other statistics from such discrete variables.


----------



## Catalin

South Pender said:


> Good points. However, your point about averages only making sense with continuously-distributed random variables is not quite correct. Most of the data that we analyze in any branch of research is in the form of a discrete (in steps) variable overlaying a continuous variable. The numbers are discrete steps because of the impossibility of capturing the infinite number of observations associated with the underlying continuous random variable. This fact, however, doesn't stop us from computing means, standard deviations, and many other statistics from such discrete variables.


That is correct in certain conditions but not in all - for instance if you can only measure the height of a person in integer multiples of let's say feet (meters would be a little too extreme :-d) and you calculate the mean over a REALLY large population the final result might still be surprisingly close to the mean that you get from measurements at 1cm increments; however the standard deviation could be quite different (and that could also suggest a measurement limitation). On the other hand if you measure the height of a SINGLE person in increments of 1 foot *no matter how many times you measure/average it* the result will be (for like 99% of the people) a very constant number which will also be quite far away from the actual height measured in increments of 1cm!
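Catalin's single-person example can be simulated directly; a sketch in Python with invented numbers:

```python
def quantize(value, step):
    """Simulate a measurement that can only report whole multiples of `step`."""
    return round(value / step) * step

true_height_cm = 178.0   # one person's actual height (invented)
foot_cm = 30.48          # measuring in whole feet

# Every repeated measurement of the same person returns the same quantized
# value, so averaging any number of them cannot recover the true height:
readings = [quantize(true_height_cm, foot_cm) for _ in range(1000)]
average = sum(readings) / len(readings)   # ~182.88 cm (6 ft), not 178
```

Averaging only helps when the random noise is at least comparable to the quantization step, so that the readings straddle more than one step.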


----------



## South Pender

Quite true. All that is required really, with a discrete random variable, is that the intervals be of the same size at all points of the scale. So, are you saying that your interval size is something on the order of 16.66 ms.? If so, would your measurements be like: -33.33, -16.66, 0, +16.66, +33.33? That is pretty coarse all right, but wouldn't a mean of, say, 10 of these produce far better data points than just one observation?


----------



## Catalin

South Pender said:


> Quite true. All that is required really, with a discrete random variable, is that the intervals be of the same size at all points of the scale. So, are you saying that your interval size is something on the order of 16.66 ms.? If so, would your measurements be like: -33.33, -16.66, 0, +16.66, +33.33? That is pretty coarse all right, but wouldn't a mean of, say, 10 of these produce far better data points than just one observation?


At some point with a very, very large amount of extra data plus using the extra fact that I can estimate the (huge) drift of the computer clock and I can force a large number of NTP syncs plus eventually knowing the inhibition period and eventually assuming that we only talk about the calibers where the seconds-hand moves very fast (in under 10 ms.) we MIGHT be able to cut the errors to maybe 3-5 ms - the point however remains that it will be INCREDIBLY time-consuming - don't forget that already (with 17 watches) I was losing over 1 hour and we are now maybe talking about losing 1 hour for a single watch ... definitely <| in the current economy :-d


----------



## Catalin

*Re: Some extra quick-info on chrono-method vs video-method*



Catalin said:


> 1. Ideally you should try both and only after actually trying in parallel get to conclusions ;-)
> ...


Some side-by-side results can be seen at:

https://www.watchuseek.com/f9/some-observations-rate-checking-386187-post2926719.html#post2926719

and in the same post the second part at:

https://www.watchuseek.com/f9/some-observations-rate-checking-386187-post2932669.html#post2932669


----------



## webvan

*Re: Some extra quick-info on chrono-method vs video-method*

Wasn't sure where to post this, so here goes : for those who have an iPhone or an iPad there is a new free app called "Emerald Time" (preview link : Emerald Time for iPhone, iPod touch and iPad on the iTunes App Store) that samples the time from different NTP servers and shows the difference with the clock of the device. Unfortunately due to a limitation imposed by Apple apparently it can't set it like AboutTime does on a PC.


----------



## mrbill

To me, gain or loss is what is important. To get the relative accuracy of a given quartz watch I use a high end Junghans-Mega atomic watch for reference, one having a VERY quick second-to-second change. I reset it just prior to the check, which I do daily at the SAME TIME for 4 to 7 days, because I have found with certainty atomic watches are NOT necessarily any more exact 18 hours after resetting than many of my higher end quartz. To check the relative time of the 'subject' watch I mount it and the atomic side by side. I then use a Lumex F35 digital camera having video capability. I video both watches side by side for 10 to 15 seconds in the macro mode (any digital camera with video will probably do). The accuracy (with the F35) is 1/30th of a second (0.033 seconds). Counting the clicks (1/30 sec intervals) between the two watches gives me a reference point.

In one day, I can determine the gain or loss. However, it is only resolved to within 0 to 0.033 seconds. You do this by re-recording and playing back the video on the camera and recounting the number of frames between the watches. With the F35, the following procedure must be used: when the atomic watch jumps a second, quickly pause the video - then back it up 1 'click' at a time until the atomic time jumps back - then do SINGLE clicks forward until the atomic watch jumps forward 1 second.

Lastly, single click until the quartz watch jumps 1 second COUNTING the clicks. 

On day 1 this is the starting point for checking not only the 'subject' watch at that point, but in say 5 days, you can get a daily gain the accuracy of which will depend on your subject watch. To get monthly gain or loss you have to note the number of clicks it is ahead or behind on the starting day and the exact atomic time. For instance, if on the fifth day it takes 20 additional clicks before the subject watch jumps, you have to note whether it jumped ahead of the original reference point or 'caught up.' That is 20 times 0.033 seconds = 0.66 seconds per 5 days. This will be accurate to +/- 0.033 seconds. Multiply by 6 to get a 30 day difference of 4.0 +/- 0.20 seconds, which is very good for a quartz watch. For more 'accuracy' (if you feel you need it) just extend the number of days over which you measure the gain/loss. You can probably determine if it gained or lost visually, especially over a 5 day period. The better quartz watches, as I'm sure most of you know, are guaranteed to +/- 15 sec/month. In my example if you divide the 0.66 seconds by 5 you will (of course) get the daily gain of 0.13 +/- 0.03 seconds...very good indeed. By the way most of my quartz watches gain from 3 to 12 seconds a month. The Valjoux 7750 in my Invicta is outstanding, running within the guaranteed quartz level of +/- 15 seconds WHEN WORN DAILY. Letting it unwind by not wearing it for a day makes it gain more, which is what most spring powered watches do according to my EXCELLENT watch maker.
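The frame-click arithmetic in this post, as a small Python sketch (1/30 s per click, prorated from the 5-day measurement to 30 days):

```python
FRAME_S = 1 / 30   # seconds per video frame ("click") with a 30 fps camera

def gain_per_30_days(clicks, days):
    """Convert counted frame clicks over `days` into a 30-day gain in seconds."""
    return clicks * FRAME_S * 30 / days

monthly = gain_per_30_days(20, 5)   # ~4.0 seconds per 30 days
```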


----------



## Haqnut

*Video panning of two separate time readouts.*

I haven't read this whole thread, so please forgive me if this has already been covered. I was just considering the situation where one might not be able to have the two time readouts displayed together in the same frame, e.g. one sweeps the recording camera from the watch readout to the digital readout on the PC monitor. If this were the case, I wonder whether the individual video frames on most current digital cameras are time-stamped from the onboard clock. If they were, this would allow calculating the results using the camera's time base, at least for the few seconds while the video moves from one time-recording shot to the next. Of course, there would be further errors, however small, introduced through this method. But its feasibility hinges on the availability of a discrete per-frame time stamp generated by the video camera.


----------



## Eeeb

*Re: Video panning of two separate time readouts.*



overlandr said:


> ... But its feasibility hinges on the availability of the single frame discrete time stamp generated by the video camera.


... a time stamp that would need to be displayed to several decimal places. I don't think it exists.


----------



## Haqnut

*Re: Video panning of two separate time readouts.*



Eeeb said:


> ... at time stamp that needs to be displayed with several decimal points. Don't think it exists.


I haven't edited video frames. What time-related information is normally stamped on each frame?


----------



## zeta

*Time server discrepancy*

I have just started to measure offset with the video method. Since my HAQ watch has a 1/100-second hand (chronograph), I run this while filming, allowing me to measure with an accuracy of about 1/30 second, since the film has a frame rate of about 30 fps.

First I tried to sync the Windows 7 clock with a time server (time.nist.gov) and show the time with jbiclock (Index of /trstuff), but the offset varied wildly at first (at first -4/100 sec, then +27/100 sec during one day), so I assume the time-server synchronization wasn't consistent. Maybe because time.nist.gov is linked to many different time servers, or because the lag from the servers is different every time?

It could also be a Windows problem: when I synced with a server after first changing the Windows time to an erroneous value, the resulting Windows time was more accurate than when syncing without changing the time first. Maybe Windows will not sync if the clock has not drifted sufficiently? (Also, syncing with a time server too often (more than once every 4 sec) may get you banned...)
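For what it's worth, the kind of scatter described above can be reasoned about with the standard NTP offset/delay formulas: any asymmetry between the outbound and return network paths biases the offset estimate by half the asymmetry. A minimal sketch with made-up timestamps, not from a real exchange:

```python
# Standard NTP/SNTP clock-offset and round-trip-delay formulas.
# T1: client send, T2: server receive, T3: server send, T4: client receive.
# Timestamps below are illustrative values in seconds.

def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Symmetric path (50 ms each way): offset estimate is ~0, delay 0.1 s.
off, dly = ntp_offset_delay(100.000, 100.050, 100.051, 100.101)

# Asymmetric path (60 ms out, 40 ms back): same RTT, but the offset
# estimate is biased by half the asymmetry, i.e. +10 ms here.
off2, dly2 = ntp_offset_delay(100.000, 100.060, 100.061, 100.101)
```

Different servers with different path asymmetries would therefore give you different offsets even if all their clocks were perfect, which may explain some of the server-to-server variation.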

I have also tried using this time server webpage instead of the windows clock:

USNO Master Clock

This page turns out to work reasonably well compared to my Windows clock tests (only a lag of about -2.6/100 sec in 24 h, which is smaller than the margin of error for these videos anyway). However, the webpage can only be used for the first 5 seconds or so before the time display starts to lag noticeably, but that is enough to make a measurement.

Of course, some webpages only copy the Windows clock time, but this page demonstrably shows a different time than the (synced) Windows clock when the two are put next to each other.

I am unable to use this java-page, being stuck at "one moment please":

The official U.S. time - clock

It will only work if I disable java-animation, but then it is useless.


----------



## Eeeb

*Re: Time server discrepancy*

Might be a java problem. Suggest you move this to its own thread.


----------



## chris01

*Re: Time server discrepancy*



zeta said:


> First I tried to sync windows 7 clock with time server (time.nist.gov), and showing the time with jbiclock (Index of /trstuff) but the offset varied wildly at first (at first -4/100 sec, then +27/100 sec during one day), so I assume that the time server syncronization wasn't consistent. Maybe because time.nist.gov is linked to many different time servers or because the lag from the servers is different every time?
> 
> It could also be a windows-problem, since when I synced with a server after having changed the windows-time to an erroneous time before syncing the resulting windows time became more accurate compared to when syncing without changing the windows-time first. Maybe because windows will not sync if the clock has not changed sufficiently? (Also, syncing with a time server too often (more than once every 4 sec) may get you banned...)
> 
> I have also tried using this time server webpage instead of the windows clock:
> 
> USNO Master Clock
> 
> This page turns out to work reasonably well compared to my windows clock tests. (only a lag of about -2.6/100 sec i 24 h, which is smaller than the margin of error for these videos anyway.) However, the webpage can only be used the first 5 seconds or so before the time-display on the webpage starts to lag noticeably, but it is enough to make a measurement.
> 
> Of course some webpages only copies the windows-clock time, but this page provably shows another time than the windows (synced) clock time when putting them next to each other.
> 
> I am unable to use this java-page, being stuck at "one moment please":
> 
> The official U.S. time - clock
> 
> It will only work if I disable java-animation, but then it is useless.


Try this - https://docs.google.com/file/d/0B9GOrnGxIg2ESjE2dlA2ZDVyeEE/edit - download each file into a folder and run the .EXE but don't wait too long, as it's gradually disappearing from the web.

This is a utility that was prepared for the AtomTime watch, which turned out to be vapourware. You can read about it here: AtomTime

Unlike the watch, the program is rather useful, since it will sync your PC to external NTP servers and then gives a very steady readout. I used to use a Windows 7 Gadget analogue clock but found that this tended to stutter a bit. The AtomTime utility is much more stable. I use a GPS receiver attached to my PC, giving me a stratum 1 NTP server, but using remote servers (you can edit the program's list) seems pretty good.

I'm looking forward to seeing your DS-2 results to compare with mine.


----------



## webvan

*Re: Time server discrepancy*

Why not use Catalin's (hope he's ok BTW) excellent n.cmd/EarthSunX combo? Works really well, although the synching is getting a bit slow these days, possibly some outdated NTP servers... EarthSunX shows milliseconds, which is very useful for our purposes ;-)


----------



## chris01

*Re: Time server discrepancy*



webvan said:


> Why not use Catalin's (hope he's ok BTW) excellent n.cmd/EarthSunX combo? Works really well although the synching is getting a bit slow these days, possibly some outdated ntp servers...EarthSunx shows miliseconds which is very useful for our purposes ;-)


I agree it does a good job; milliseconds are essential for the video method. Personally, I think the UI implementation sucks and I can't live with it. A parallel test of the video and stopwatch methods showed me that the greater precision of video was of no advantage after 2 weeks' measurements and the (for me) easier operation of stopwatch was much better. However, for the latter method a ticking second hand is far more suitable as it means that you are watching two similar clocks. Seriously, if you might ever want to use the stopwatch method with a PC-based clock then grab a copy of the AtomTime utility while it can still be found.


----------



## webvan

*Re: Time server discrepancy*

Yes, I got it, but I'm probably not using it properly, as it's (slightly) out of sync with n.cmd/EarthSunX (just drag your mouse to the red bar on the side to make it pop up) and Emerald Time on iOS. Actually, I just switched from the default UK NTP server to the time.nist.gov NTP server and it's OK now.


----------



## webvan

*Re: Time server discrepancy*

Just switched to a new laptop and was desperately looking for n.cmd...finally found it here : https://www.watchuseek.com/f9/ezchronos-headcount-first-2011-project-start-498121-6.html#post3809997 - also hoping that all is well with Catalin!


----------



## ronalddheld

*Re: Time server discrepancy*



webvan said:


> Just switched to a new laptop and was desperately looking for n.cmd...finally found it here : https://www.watchuseek.com/f9/ezchronos-headcount-first-2011-project-start-498121-6.html#post3809997 - also hoping that all is well with Catalin!


You could have asked me as I just ran it for the monthly check.


----------



## webvan

*Re: Time server discrepancy*

Well, I also have it on my old laptop, but I made it a point to find it in the maze of old messages here... besides, that gave me a chance to ping Catalin!

Have you noticed n.cmd taking more time to return a result/setting now too? I suspect some of the servers it uses have changed/are down.


----------



## ronalddheld

*Re: Time server discrepancy*

The script may need an update as return data has been slow for maybe a year.


----------



## igna

*Re: Time server discrepancy*

After reading this very interesting topic: molemans-hunt-milliseconds-168460.html

I put together a setup to determine the accuracy of a watch by capturing its electromagnetic pulse and a reference signal.

The setup is very simple:

1) a microphone 
2) a telephone pickup coil (works fantastically well and costs ~$2)
3) a 2-female-to-3.5 mm stereo jack adapter (~$2) 
4) an audio recorder and editor such as ocenaudio, audacity, goldwave, etc. (free)

Then just go to time.is and enable sound ticks (>> more >> sound) for the reference signal.
The method is as simple as recording and measuring the time between pulses.

There are many details:


The reliability of the time source: time.is may not be available in the future. If that happens we need another source; one option is our own computer using NTP and a script that beeps on every second. 
There will be some delay from the true start of the second to the sound beep, but as far as I can see it is constant, so you can take the beep as the reference point. 
A watch can be several seconds away from the correct time. That part has to be determined by naked-eye offset; this method then gives the intra-second difference. 
Once you have the pulses, the "delta" is measured from the *beginning of the reference pulse* to the *end of the watch pulse*, because the end of the EM pulse better represents when the second hand has moved. This is not easy to determine, since there are many different pulse shapes, so I just take the start of the EM pulse (easy to spot) and add 10 ms. 

Sure, the method has other pros and cons, but after using the stopwatch for more than 2 months I find this method much more precise, and once you have the setup, making measurements is very fast. Another advantage is that you can save the recordings for future review. Pulse analysis can show malfunctions and much more about our watches! (read *Hans Moleman*'s thread)
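The delta measurement described above comes down to finding pulse onsets in the recorded samples. Below is a minimal illustrative sketch in Python using a synthetic, hard-coded buffer in place of a real recording; the sample rate, threshold, and function name are my assumptions, not part of igna's setup:

```python
# Find pulse onsets in an audio buffer by simple amplitude thresholding,
# then report the delta between the first two pulses in milliseconds.
# The synthetic data below stands in for a real two-channel recording.

RATE = 8000        # assumed sample rate (Hz)
THRESHOLD = 0.5    # assumed onset threshold (normalized amplitude)

def pulse_onsets(samples, threshold=THRESHOLD):
    """Indices where the signal first crosses the threshold upward."""
    onsets = []
    above = False
    for i, s in enumerate(samples):
        if abs(s) >= threshold and not above:
            onsets.append(i)
            above = True
        elif abs(s) < threshold:
            above = False
    return onsets

# Silence with a reference tick at sample 800 and a watch pulse at 1200.
buf = [0.0] * 2000
for i in range(800, 810):
    buf[i] = 0.9
for i in range(1200, 1210):
    buf[i] = 0.7

ticks = pulse_onsets(buf)
delta_ms = (ticks[1] - ticks[0]) * 1000 / RATE
print(delta_ms)   # 400 samples at 8 kHz -> 50.0 ms
```

A real recording would of course come from a WAV file, and real EM pulses have messier shapes than a clean burst, which is why the post suggests picking one consistent feature (pulse start plus a fixed correction) as the reference.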

My contribution is the finding of the pickup coil as an EM-pulse detector and the explanation given here. All credit goes to Hans and all the other great people writing in this HAQ forum.

Regards, Igna.

The image setup:


----------



## igna

*Re: Time server discrepancy*

Two videos showing the setup and how to check the time difference using the EM pulse.

This video uses a program using local time (w/NTP) to generate the beep instead of time.is, but it shows how easy the setup is and how fast you get the measurements.

The measurements in audacity:

Igna.


----------



## aaronhou_

*Re: Time server discrepancy*

thank all of you kind souls for this...


----------



## diablogt

*Re: Time server discrepancy*

I feel it's too much work; besides, quartz watches are accurate as hell. Does a few seconds per year upset anyone?

On an auto watch, on the other hand, the time can be off by 5-6 s/day, so that's 30 minutes/year, and that is substantial.


----------



## chris01

*Re: Time server discrepancy*



diablogt said:


> I feel its too much work, also quartz watches are accurate as hell. Does a few seconds per year upset anyone?


So don't waste your time. Move along, nothing to see here.


----------



## ronalddheld

chrisca70 said:


> Amazing info here! I love to tinker with stuff in fact I am currently taking an online watchmaking course.
> 
> Here is an alternative... although requires some sort of honor system and it has to be kept within the US or same country for practical matters. If 20 of us pitch in and put $60/each we can purchase a used Professional Witschi Quartz Analyzer on evilBay, the machine not only measures accuracy but can also be used to evaluate electrical and mechanical condition of our HAQ watches. I don't think one would spend more than 15 minutes per watch to fully diagnose it with this machine. So we would require a volunteer to be the machine master, and everyone else will ship their watch to have it diagnosed...However, you may have a friend watchmaker in town with this type of equipment that can diagnose the watch for you...
> 
> how to use it...


Many people here have good equipment. Have you read any of the current threads involving timing? Your post is really inappropriate here.


----------



## igna

So... $60 each x 20 to buy a $1,200 used machine. Then send a $5k Breitling (for example) by post to be tested by the "machine master" for 15 minutes, and wait for it to be sent back.

Too much risk and time for something it can be done at home with little effort.


----------



## ronalddheld

No more of the last two posts, please. I am leaning toward deleting them.


----------



## dwalby

anybody have any experience with this android app:

https://play.google.com/store/apps/details?id=partl.atomicclock&hl=en_US

I tried it out the other night and it seemed OK: I used the frame-by-frame video method of observing when the second hand moves and got pretty consistent results. Not sure how reliable the milliseconds display really is, though. Anybody else used this?

The practical limitations are the update rate of the phone's on-screen time display and the 60 Hz video refresh interval of 16.67 ms. I figure I'll use it on a weekly basis and look for 0.100-sec deviations: 0.1 sec/week ~= 5 spy.


----------



## dwalby

I found another relatively simple method for sub-second accuracy measurement for anybody with Perl running on a linux computer. Same approach as the previous post, different apps.

the linux command "date +%s.%2N" returns the current unix time with two decimal places (or use a number other than 2 and you get that many decimal places)

this simple piece of Perl code will display the current unix time on your screen without scrolling:

while (1) {
    $x = `date +%s.%2N`;
    chomp($x);
    printf("%s\eM\n", $x);
}

(here \eM is Perl's escape for the ESC character followed by M, the terminal's reverse-index sequence, which moves the cursor back up one line so the next iteration overwrites the display; the forum editor had mangled the curly braces and the raw ESC character in the original post)

then download a free video app for your phone that can play frame-by-frame video. A couple Android options are "slow motion frame video player" by ProFrameApps or "slow motion/frame player" by SOHO Nishikawa.

launch the Perl script, put your watch next to the screen display, and record a few seconds of video. Advance the video frame by frame until your second hand transitions, and note the decimal second value displayed on the screen for that video frame. Look at 2 or 3 consecutive seconds to get an average value of the offset, since if you shoot at 30 fps you'll see the time jump in roughly 0.03-second increments from one frame to the next, and at 60 fps it's half that.

I've used this method a few times and the unix time seems to be consistent with atomic time, so I think it's fairly reliable. One caveat is that the unix time displayed on the screen is the number of seconds since a base date, so its seconds count is not modulo-60 like a typical watch, it's modulo-100. Depending on when you start recording, there may be a multiple-of-20-second offset between the screen seconds count and your watch's second hand, but that shouldn't cause any confusion. For example:

Unix display offset relative to watch seconds, by starting minute:

0.00 : any multiple of 5 minutes
60.00 : 1/6/11/16, etc. minutes
20.00 : 2/7/12/17, etc.
80.00 : 3/8/13/18, etc.
40.00 : 4/9/14/19, etc.
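The pattern in that table follows from the display wrapping modulo 100 while a watch's seconds wrap modulo 60: at m whole minutes past a 5-minute boundary, the display leads the watch by (60 * m) mod 100 seconds. A quick check:

```python
# Unix-seconds display wraps mod 100; a watch second hand wraps mod 60.
# At m whole minutes past a 5-minute boundary, the display leads the
# watch's seconds count by (60 * m) % 100 seconds.

offsets = [(m, (60 * m) % 100) for m in range(5)]
print(offsets)   # [(0, 0), (1, 60), (2, 20), (3, 80), (4, 40)]
```

The cycle repeats every 5 minutes (300 seconds is a common multiple of 60 and 100), which is why the table only needs five rows.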


edit: for the last couple posts I've been focused on UTC time sources that display sub-second resolution on a screen, so I somehow forgot about another method that's equally accurate and only requires a GPS app on your phone. Use the same video method, and count frames between the time when the GPS time display second changes, and your watch second indicator changes (or vice-versa depending on whether your watch is fast or slow relative to the GPS second). Since the video frame rate is the limiting factor in time resolution in either case, the GPS method requires nothing more than a simple phone app for the time reference.


----------



## wrest

Here's a relatively hassle-free video method.

It requires an iPhone (I just don't know whether Android will also do).

TimestampCamera app (free, https://apps.apple.com/us/app/timestamp-camera-basic/id840110184 ) records video with timestamp accurate to 1 millisecond in _each frame_. That gives 1/30 sec accuracy.
That app can also record slomo, that gives up to 1/240 sec accuracy (depending on phone model). My phone supports 120 frames/sec.
Emerald time app (free, https://apps.apple.com/us/app/emerald-time/id290384375 ) tells iphone clock deviation from "atomic clock" via NTP, I believe with about 10 millisecond accuracy (though I don't have a proof of this value).
CMV Free app (free, https://apps.apple.com/us/app/cmv-slow-frame-frame-video-analysis-coachmyvideo/id499915119 ) can play recorded video frame-by-frame.

So, if one notes the iPhone clock's deviation and the watch's deviation from the iPhone, adding the two gives the watch's deviation.
Accuracy would be about 10 milliseconds using slow-motion mode (limited by the iPhone deviation measurement from the Emerald Time app, not by the video recording), or 30 milliseconds using normal-speed video.

The problem, though, is that with a digital display watch, changing the seconds on the display takes time, and you can see several frames with an undefined seconds display (say, 59 changing to 00 over 30-90 milliseconds). An analog display watch (physical second hand) has the same behavior: there are several frames while the second hand moves from one rest position to the next, so it's up to you to decide when the second has actually changed, either at the beginning of the hand's movement or at the end. But having decided once and for all which moment you take as the new second, you can then get consistent results.

Measuring the watch's deviation twice with a 10-day gap, at 15-millisecond accuracy each time, using 60 frames/second video (about 3 milliseconds per day) gives you an instantaneous yearly rate with about 1 second per year accuracy.
Each measurement requires only the phone and the watch (and a good Internet connection for the Emerald Time app) and takes about 1 minute.
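The two-measurement rate estimate above can be written out explicitly. A sketch, assuming the two measurement errors are independent and so combine in quadrature; the 10-day gap and 15 ms uncertainty are from the post, while the 40 ms drift is an invented example:

```python
import math

# Annualized rate from two offset measurements taken `days` apart.
# Each measurement carries an uncertainty `sigma` (seconds); the two
# combine in quadrature for the drift between them.

def yearly_rate(offset1, offset2, days, sigma):
    drift = offset2 - offset1            # seconds gained over the gap
    drift_err = math.sqrt(2) * sigma     # quadrature sum of two sigmas
    scale = 365.25 / days                # prorate to one year
    return drift * scale, drift_err * scale

# Example: watch drifts +40 ms over 10 days, measured to ±15 ms each time.
rate, err = yearly_rate(0.000, 0.040, 10, 0.015)
print(round(rate, 2), round(err, 2))   # 1.46 0.77 (s/yr and ±uncertainty)
```

With a 10-day gap and ~15 ms per measurement, the uncertainty works out to under 1 s/yr, consistent with the post's claim.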

I believe using a phone is way better than using a PC, because the phone's operating system (iOS) gives very high priority to the foreground (on-screen) app, so there are no unpredictable delays from multitasking.


----------



## wrest

dwalby said:


> method that's equally accurate and only requires a GPS app on your phone. Use the same video method, and count frames between the time when the GPS time display second changes, and your watch second indicator changes (or vice-versa depending on whether your watch is fast or slow relative to the GPS second). Since the video frame rate is the limiting factor in time resolution in either case, the GPS method requires nothing more than a simple phone app for the time reference.


Well, GPS is known to have a nanosecond-accurate (short-term stable, of course) clock in the receiver/client circuitry (not that accurate by itself, of course, but after synchronization with the satellite signal).
BUT! There's no evidence that a phone _has any access_ to that accurate clock. What I believe is that the phone sees the embedded GPS receiver as an external device and receives NMEA (or similar) text messages from it once a second. The accuracy of that is questionable (messages with timestamp and coordinates are formed in the GPS receiver, queued, and wait to be processed by the phone, etc.). If you have proof otherwise, it would be great to see it.


----------



## dwalby

wrest said:


> Well, GPS is known to have nanosecond accurate (stable short-time, of course) clock in receiver/client circuitry (not that accurate by itself of course, but after syncronization with the satellite signal).
> BUT! There's no evidence that a phone _has any access_ to that accurate clock. What I beleive, phone sees embedded GPS receiver as some external device and receives NMEA (or like that) text messages from it once a second. Accuracy of that is under question (messages with timestamp and coordinates are formed in GPS receiver, queued, waiting to being processed by phone etc). If you have a proof of otherwise, then it would be great to see it.


I have no technical info on the accuracy of the GPS app time display on a smartphone, but I've been using this method to track a HAQ watch for the last 5 weeks and it seems to be consistent and reliable. In other words, I look at the video frame offset between the GPS-reported time second change and my HAQ second-hand tick, and track that offset in a spreadsheet. It's been consistently tracking at the same relative rate, so this method of tracking watch accuracy appears to be viable.

As far as the GPS reported time on the app is concerned, if I record a 5 second video interval I seem to get 3 or 4 seconds where the GPS second ticks over on the same frame number, and there's usually one outlier where the frame varies by 1 or 2 from the nominal. That is probably explained by app uncertainty, but if I use the most common frame number over the 5 second interval as the reference, then my HAQ accuracy always seems to be consistent over days and weeks.


----------



## wrest

dwalby said:


> As far as the GPS reported time on the app is concerned


What GPS app on what device do you use?


----------



## dwalby

wrest said:


> What GPS app on what device do you use?


android phone, free app is called "GPS Status & Toolbox" The time is given in the "last fix" part of the display and updates in real-time once per second.

https://play.google.com/store/apps/details?id=com.eclipsim.gpsstatus2&hl=en_US


----------



## wrest

dwalby said:


> android phone, free app is called "GPS Status & Toolbox" The time is given in the "last fix" part of the display and updates in real-time once per second.
> 
> https://play.google.com/store/apps/details?id=com.eclipsim.gpsstatus2&hl=en_US


I tried two Android apps that are supposed to report the deviation between the internal clock and NTP/GPS.
First is ClockSync https://play.google.com/store/apps/details?id=ru.org.amip.ClockSync
Second is GPS Time https://play.google.com/store/apps/details?id=net.sourcewalker.gpstime
Both report the difference to three digits (milliseconds).

And they differ by up to one second. ClockSync gives consistent data (close and then re-run) while GPS Time does not.
Thus, I conclude that applications on Android simply do not have access to GPS time accurate down to milliseconds.
Android version: 7.1, some Galaxy Tab.


----------



## dwalby

Yeah, I'm still experimenting and learning by observation which apps appear to be reliable and which are not; I found similar results to yours. There seems to be some variation between the ones that claim millisecond resolution, so I've abandoned a couple of those for inconsistent results.

One other thing I've noticed is that my Verizon phone's time display isn't synced to GPS time, and the amount by which it deviates varies by network location. When I measured it at work it was 1.7 sec off (slow), and at home it's 0.5 sec off (slow). From what I read, cellular networks should use GPS for their time reference, so I haven't been able to explain that discrepancy unless it's something in the phone itself. When I check that GPS app against a radio-controlled clock and a unix network-time script I run on my work computer, they all seem to be in sync visually, so the phone time seems to be the odd one out, but I'm not sure why. I haven't compared the GPS app against the unix network time at sub-second resolution; I may try that just to see what I find. 

But, for measuring watch accuracy, the GPS app I referenced has been reliable over the last month and a half. It consistently reports my VHP as running fast at a rate of about 0.5 sec/month, or 6 spy. Given the resolution of the measurement (1/60 sec), if it were fluctuating by much I'd expect less consistent results, so it's my go-to app for now and is as good as I'll ever need for this application.


----------



## wrest

dwalby said:


> the amount by which it deviates varies by network location. When I measured it at work its 1.7 sec off (slow) and at home its 0.5 sec off (slow).


That's because the Android operating system doesn't use NTP for time sync (why? that's mysterious to me).



dwalby said:


> But from what I read, cellular networks should use GPS for time reference, so I haven't been able to explain that discrepancy unless its something in the phone itself.


They do use it to keep in sync between cell towers. But for some reason they don't (or can't) provide that to user devices (I don't know why). So it's not your phone, it's your service provider.



dwalby said:


> But, for measuring watch accuracy, that GPS app I referenced appears to be reliable over the last month and a half.


That might depend on the device manufacturer or Android version. Maybe polling GPS on your phone occurs more frequently than once per second, for example. It works for you, but it is not universal. Universal is a dedicated GPS- or radio-controlled device, such as a radio-controlled watch/clock.

What does not depend on the mobile device and its operating system is NTP software, which shows you not just the real time but also the uncertainty, so you can trust it, especially if it queries multiple servers, chooses the best packet jitter, etc.


----------



## dwalby

Here's a little more data to throw into the mix. I video-recorded my NTP unix script at work along with my GPS Status & Toolbox app and found that over a 5-second interval the reported GPS time varied from 0.04 to 0.10 seconds from the reported NTP time. That 60 ms differential could be due to app latency, as I've noticed a similar thing when calibrating my watch offset with the GPS app (the GPS app's second tick changes on a slightly different 60 fps frame count from one second to the next). 

Oh yeah, as I'm typing this I just realized I had the camera set to 30fps mode, as I'd used that mode for a previous recording. I'll re-try tomorrow with 60fps and report back. I suspect that will give slightly more consistent results, and will also show day-to-day variation between the GPS app and NTP, if any.

BTW, I checked the reported NTP offset prior to the test and it was < 1ms.


----------



## wrest

dwalby said:


> BTW, I checked the reported NTP offset prior to the test and it was < 1ms.


Good, though I believe NTP uncertainty is around 10 ms; still, those 10 ms is quite real, achievable accuracy.


----------



## wrest

dwalby said:


> my GPS status and toolbox app and found that over a 5 second interval the GPS time reported varied from 0.04 to 0.10 seconds from the NTP time reported.


Well, I also tried the GPS Status app.
It's quite consistent and showed -100..170 ms (maybe you're right and that's app latency). It means that the GPS time display on a phone depends heavily on the app used for it. Perhaps GPS Status is among the good apps...


----------



## dwalby

wrest said:


> Well, I also tried GPS status app.
> It's quite consistent and showed -100..170ms (maybe you're right and that's an app latency). It means that GPS time display on a phone heavily depends on the app used for it. Perhaps GPS status app among those good apps...


It's not clear from this comment what you were using as your other time reference; please clarify.

BTW, dunno if its because I'm using my network at work and its high performance or what but here's my ntpq -p report:

delay offset jitter
==============================================================================
0.472 -0.735 0.334
0.423 -1.055 0.911
0.490 -0.984 0.474

So it appears I'm within 1 ms with my NTP clock. The script I run to display the NTP time also has some small latency when it runs, and since the video camera isn't fast enough for 1 ms resolution I use 10 ms resolution in my NTP time display. I'm recording it again now, but won't be able to post the results until later. I have to be at home to do a frame-by-frame on my video editor to measure the offsets; I don't have that capability at work. Friday nights are busy, so it may be tomorrow AM by the time I get to that.


----------



## wrest

dwalby said:


> its not clear from this comment what you were using as your other time reference, please clarify.


Video with a timestamp in each frame, taken on an iPhone after first checking its system clock against NTP, showing around 10 ms deviation (it reports 1 ms, but I don't believe it).
See post #47 of this thread for details: https://www.watchuseek.com/f9/methods-determining-accuracy-watch-382752-5.html#post50099355


----------



## wrest

dwalby said:


> here's my ntpq -p report:
> 
> delay offset jitter
> ==============================================================================
> 0.472 -0.735 0.334
> 0.423 -1.055 0.911
> 0.490 -0.984 0.474
> 
> So I'm within 1ms it appears with my NTP clock.


I'm not quite familiar with ntpq output, but a 0.5 ms delay (if it actually is in milliseconds) means being within about 75 km of the NTP server ("delay" is supposed to be the round trip), and that's not taking into account routing hop delays.
But if the delay (RTT) is in seconds, that's plausible, though then the RTT is quite long.


----------



## dwalby

wrest said:


> I'm not quite familiar with ntpq output, but being in 0.5ms delay (if it actually is in milliseconds) means being in less than 75 km range from NTP server ("delay" supposed to be roundtrip), and it's not taking in account routing hop delays.
> But if delay (RTT) is in seconds, it's ok. But RTT is quite long then.


delay: indicates the roundtrip time, in milliseconds, to receive a reply

offset: indicates the time difference, in milliseconds, between the client server and source

disp/jitter: indicates the difference, in milliseconds, between two samples

When I googled the IP address of the NTP server it said its time reference was GPS, but didn't mention its geographical location. I'm guessing that it's a lot easier to use GPS as a time reference anywhere in the world than to use an actual NIST reference from Ft. Collins, CO over one of their servers. I don't really know much about this, though; I'm just learning as I go.

Also, since the video frame interval is about 16.66 ms anyway, a few ms here and there aren't going to make any difference in the scheme of things. With 1 ms of propagation delay being about 300 km, I'm guessing that wherever my server is located, it's within a few ms of propagation delay. The basic exercise is to see if I can get a time source that's reliably accurate and consistent to within a few tens of ms for measuring watch performance, and so far my current methodology appears capable of achieving that.

We're talking about 5-10 spy rates for HAQ, which is 13.7-27.4 ms/day. So something with 50 ms accuracy and consistency should be able to show the timekeeping trend of any HAQ watch after a week or two, and give a very good estimate after a month.
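The spy-to-milliseconds-per-day conversion used above is just division by the number of days in a year; a one-liner to check the figures:

```python
# Convert a seconds-per-year (spy) rate to milliseconds per day.
DAYS_PER_YEAR = 365.25

def spy_to_ms_per_day(spy: float) -> float:
    return spy / DAYS_PER_YEAR * 1000

print(round(spy_to_ms_per_day(5), 1), round(spy_to_ms_per_day(10), 1))
# 13.7 and 27.4 ms/day, matching the figures quoted above
```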


----------



## wrest

dwalby said:


> I'm guessing that its a lot easier to use GPS as a time reference anywhere in the world than to use an actual NIST reference from Ft. Collins CO over one of their servers.


For GPS, you need a receiver with dedicated 1pps signal output and you need an open view to the sky. And you need a way to properly use that 1pps signal.

For NTP, you just need the Internet, so NTP is definitely easier.
Plus, NTP gives you not just accurate time but also bounds on the likely error -- the jitter value in the ntpd output (a confidence interval).



dwalby said:


> We're talking about 5-10spy rates for HAQ, which is 13.7-27.4ms/day. So something with 50ms accuracy and consistency should be able to show a timekeeping trend of any HAQ watch after a week or two, and have a very good estimate after a month.


That's correct. 2-4 weeks is more than enough.


----------



## dwalby

OK, checked the 60 fps video tonight and it was pretty consistently a 60 ms difference between the NTP time and the GPS time reported on my phone app, with the phone lagging. A couple of the readings went as high as 100 ms, but those were few (2/10), and one was 40 ms. So this seems consistent with the previous day's result with the 30 fps video, and based on my tracking of my watches using the GPS app, I'm satisfied that it's accurate and consistent enough for that application.


----------



## Davismi

South Pender said:


> Not quite true. If the error associated with a single timing at the beginning of the 24-hour period is .04 seconds, as you have stated, it follows that there is also a similar error associated with the single timing at the end of the 24-hour period. The change (or drift) estimate relies on both measurements. Thus, the total error in the assessment of drift--by which you estimate accuracy--is about .06 seconds, or about *22 spy*. This is a perfect illustration of why attempting to estimate _spy_ using a one-day timing period will necessarily be far too imprecise.
> 
> However, your assertion of an error bandwidth of .04 seconds accounts only for camera error. What about clock error? This must be factored in, as noted in my just-preceding post. Once we do this, we get a single-measurement bandwidth of about ± .06 seconds, and this applies at each measurement point. Since two measurements are required to evaluate drift (that is to get a _spy_ value), the actual error bandwidth of the change, or drift, calculation will be about ± .08 - .09 seconds. Given this fact, any estimate of _spy_ using a single observation at each time point with the Video Method will produce a total error bandwidth of about ± 31 seconds for a one-day assessment of _spy_, and about ± 4.5 seconds for a _spy_ estimate based on a one-week time period--that is, taking the offset from the atomic clock at Time 1 and then again at Time 1 + 7 days and using the change between the two as an estimate of drift, prorated to an annualized _spy_ value.


I found the easiest way to check accuracy is to put the watch next to a standard such as an atomic clock and take a picture. Over several months this works really well.
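The error-propagation arithmetic in the quoted post can be verified in a few lines. A minimal sketch: the 0.06 s single-timing bandwidth is the figure taken from the quote, and the quadrature rule assumes the two timing errors are independent.

```python
import math

# Independent uncertainties combine in quadrature; a drift estimate uses
# two timings, so its error is sqrt(2) times the single-timing error.
def quadrature(*errors):
    return math.sqrt(sum(e * e for e in errors))

single = 0.06                       # s, per-timing error (camera + clock)
drift = quadrature(single, single)  # error of a two-timing drift estimate
spy_1day = drift * 365.25           # annualized error, 1-day interval
spy_1week = spy_1day / 7            # annualized error, 7-day interval
print(round(drift, 3))      # -> 0.085 s, the quoted .08-.09 bandwidth
print(round(spy_1day))      # -> 31 spy for a one-day assessment
print(round(spy_1week, 1))  # -> 4.4 spy for a one-week assessment
```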


----------

