January 2, 2007
I have something of a clock fetish. My latest acquisition is a nixie tube clock from my wife, as a Christmas gift.
My computers aren't just giant calculators, they're also clocks. Unfortunately, my nixie clock is a much more reliable timekeeper than any of my PCs are.
PCs aren't very accurate timekeepers. The distribution of times reported here is a little disturbing, as are the giant peaks on the extreme left and right of the graph. The PCs with wildly inaccurate clocks outnumber those with accurate clocks about 2:1.
| PCs with correct time (+/- 5 sec)                      | ~3000 |
| PCs whose internal clocks are more than 8 minutes off  | ~7000 |
You certainly won't mistake PCs for atomic clocks any time soon. I've noticed that my Media Center PC in the living room is losing a lot of time. It's frequently a minute or more off, even with internet time synchronization turned on in the Windows control panel.
Right now it's fairly accurate, but only because Windows just performed its internet time sync. Normally you may not care if your PC's clock is off by 5 seconds or even a few minutes. But clock accuracy is important for a PC designed to record television shows that start and stop at specific times.
One way to "fix" a skewed PC clock, at least one that's connected to the internet, is to have it synchronize often with a reliable internet time source. Unfortunately, there's no visible UI in Vista or XP to change the synchronization schedule. MSKB article Q223184 appears to have a frequency setting, but this only applies to computers on a domain. On a domain, clients time sync with the domain controller-- a dedicated server. Of course, servers are still PCs, so their clocks aren't any more accurate than the one inside your desktop. However, servers tend to be synchronized much more aggressively with authoritative time sources. Compare this graph of observed webserver times to the one I presented earlier:
My computer isn't on a domain. Browsing around the registry keys, I found a SpecialPollInterval setting under the W32Time\TimeProviders\NtpClient key which looked promising. I did a web search and found this worldtimeserver.com page which confirms my finding. I changed the setting, stopped and started the w32time service, and it worked. The same page also describes how to add more NTP time server sources through the registry or at the command line. So my clock drift problem is solved, for the moment.
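For the record, the change boils down to a single registry value. Here's a sketch of it as a .reg file; the interval is in seconds, and 3600 (hourly syncs) is just the value I picked, not anything official:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient]
; Poll interval in seconds; dword:00000e10 is 3600 decimal, i.e. hourly
"SpecialPollInterval"=dword:00000e10
```

After importing it, stop and start the w32time service (net stop w32time, then net start w32time) so the new interval takes effect.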
But this fix only addresses the symptom, not the problem itself. Why are PC clocks so inaccurate? Part of it is by design. An extremely accurate real-time clock isn't necessary for your PC to function, and adding one would probably add cost that OEMs like Dell, HP, and Apple don't want to bear. Most manufacturers opt for the "good enough" solution:
The real-time clock (RTC) built into most machines is far from reliable. Unless its battery dies or it encounters a Y2K problem, it does a fairly good job of remembering the time while the computer's power is turned off -- as long as you don't leave the computer off more than several hours, and don't care if the clock is wrong by a minute or two...or three...or more. The resolution of most PC real-time clocks is one full second, and most RTCs drift considerably over time. It is not unusual for an RTC to gain or lose several seconds or even minutes a day, and some of them -- while still considered to be operating correctly by the manufacturer-- can be off by an hour or more after a week or two without correction.
To be fair to the manufacturers, the real-time clock inside your PC is good enough for most purposes. One research study (pdf) corroborated this conclusion:
A typical accuracy of 35ms with respect to the UTC scale is attainable from almost any PC connected to the internet. This performance can be considered adequate for the vast majority of real-time data acquisitions, even in professional applications.
PC clocks should typically be accurate to within a few seconds per day. If you're experiencing massive clock drift-- on the order of minutes per day-- the first thing to check is your source of AC power. I've personally observed systems with a UPS plugged into another UPS (this is a no-no, by the way) that gained minutes per day. Removing the unnecessary UPS from the chain fixed the time problem. I am no hardware engineer, but I'm guessing that some timing signal in the power is used by the real-time clock chip on the motherboard.
There is an entire class of software problems, bugs, and exploits involving the system clock. Whether it's set to the wrong time, or it's drifting too quickly or slowly, the results can be unexpected or possibly painful. Here are a few I can think of offhand:
- You can't sync your clock with an NTP source if the clock is already too far out of date. How ironic.
- Some versions of Windows will fail during the setup phase with a cryptic error if the clock is set to a very old date.
- Kernel hacks can speed up or slow down the clock to facilitate cheating in online games, as related in this article. I remember this exact hack happening in the original Counter-Strike; there was suddenly a player on the map running around at breakneck speeds, gunning everyone down before they could respond.
- Some encryption techniques and login mechanisms (Kerberos) will fail if the system clock is too far out of sync.
- A recent Vista activation hack involved setting your system's date back in the BIOS prior to install.
- It's theoretically possible to attack servers by measuring their clock skew. I'm extremely skeptical of this particular attack, but clock skew is an interesting fingerprint.
I haven't even touched on the tricky issue of synchronizing events between PCs, each of which has its own idea of what time it is, and how fast time is advancing. This can lead to some problems, as noted in the NIST document Configuring Windows 2000 and Windows XP to use NIST Time Servers (pdf):
The time clock in the computer is used to keep track of when documents (files) are created and last changed, when electronic mail messages are sent and received, and when other time-sensitive events and transactions happen. In order to accurately compare files, messages, and other records residing on different computers, their time clocks must be set from a common standard. It is particularly important that computers that are networked together use a common standard of time.
We tend to think of time as an absolute, a universal interval that is the same everywhere. But inside the PC, time is a malleable material. We can go forward into the future, back into the past, or even change the rate of time's passage. This is something that's easy to forget when you're developing software, and it can definitely come back and bite you.
Posted by Jeff Atwood
I find hwclock a fun command to run on linux systems, as it's a good diagnostic to see if the cmos battery is failing. My clock itself is -0.214587 seconds behind, which seems like a lot.
But as for the media centre pc, there is always the issue of *their* computers keeping the right time :P. They probably don't have the same sort of issue XP has with time.
I haven't seen clock drift personally, but it is an issue with email; trusting the client's reported time when sending email has led to embarrassing newsgroup postings that don't show up right when sorted by time received. Email itself is entirely reliant on client-reported times :/
This still doesn't answer what five things we don't already know about you Jeff. Cough it up man.
The RTC in a PC uses a crystal to keep time. But I guess extreme temperature variations and electromagnetic interference have more to do with clock skew than cheap chips. Your wrist watch almost never loses time because your body keeps it at a constant temperature.
It might be possible to create a USB/serial dongle to sync the PC clock using a good-quality crystal (or just feed it the output of your nixie clock :)
This is another thing that should not be relied on. For example, the network can't guarantee a certain speed or latency, or even the connection itself. Likewise, the system clock should not be expected to be precise or accurate, to advance at a constant rate relative to other systems, or to always increment (time can leap backward at any moment, e.g. when it is synchronized). If you need atomic clock precision and stability, use an atomic clock.
By the way, think about clocks on satellites. Not only are they very distant from Earth and from other satellites, you even have relativistic effects there.
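The "time can leap back any moment" point is worth illustrating in code (a Python sketch; most languages have an equivalent). For measuring durations, use a monotonic clock, which is guaranteed never to jump backward, rather than wall-clock time, which can be stepped by a sync at any moment:

```python
import time

# Wall-clock time can jump backward (an NTP step, a user changing the date),
# so naive elapsed-time math against it can go negative or wildly wrong.
wall_start = time.time()

# A monotonic clock only ever moves forward; it's the right tool for
# measuring intervals, though it says nothing about calendar time.
mono_start = time.monotonic()

time.sleep(0.05)  # do some "work"

wall_elapsed = time.time() - wall_start       # could be negative if the clock stepped
mono_elapsed = time.monotonic() - mono_start  # always >= 0, by guarantee

print(mono_elapsed >= 0)  # True
```

The same discipline applies to timeouts, rate limiters, and game loops: anything that computes "how long since X" from the wall clock is quietly betting that nobody synchronizes the clock mid-interval.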
Suraj wrote: The RTC in a PC uses a crystal to keep time.
That's the heart of the problem: Typical quartz crystals have manufacturing tolerances of between 10ppm (30 secs/month) and 100ppm (5 mins/month!). To improve this, board designers often fit a trimming capacitor which can be adjusted manually.
But, those adjustable caps cost money to fit and time to set up. I expect many mobo manufacturers simply don't bother to do this.
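Those tolerance figures are easy to sanity-check: a drift of N parts per million means N microseconds of error per second, accumulating linearly. A quick Python sketch:

```python
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400
SECONDS_PER_MONTH = 30 * SECONDS_PER_DAY

def drift(ppm, interval_seconds):
    """Worst-case clock error, in seconds, accumulated over the interval."""
    return ppm * 1e-6 * interval_seconds

# A 10 ppm crystal drifts under a second a day, roughly 26 seconds a month...
print(round(drift(10, SECONDS_PER_DAY), 3))    # 0.864
print(round(drift(10, SECONDS_PER_MONTH), 1))  # 25.9

# ...while a 100 ppm part can gain or lose several minutes a month.
print(round(drift(100, SECONDS_PER_MONTH) / 60, 1))  # 4.3 (minutes)
```

Which lines up with the 30 secs/month and 5 mins/month figures quoted above, give or take the length of a month.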
As for deriving timing from the AC supply frequency, I think that's pure FUD. I don't expect this technique has been used with *ANY* computers in the last 20 years. Remember, your clock keeps running even when your computer is powered off - and that PCs have universal supplies that work from 100-230VAC, 50-60Hz...
My time nightmare happened back in the WinNT 4.0 days.
A coworker and I were trying to delete a server out of the PDC on the domain. We were planning to rebuild it with the same name and put it back in. We would blow the dead server away, force a sync, and give it 15 minutes to make sure it was gone across the whole domain.
Then at about the 30 minute mark the dead server name would show up again. It took about 7 rounds and 4 hours till we figured out what was going on.
A little-noticed/used BDC at our recovery location was off by 32 minutes. What would happen is that the PDC would sync up our local servers, but when the off-site BDC got the message, it wasn't deleting the dead server. Then when the PDC passed the 32-minute mark, the BDC re-added the dead server. Once we ran net time on the BDC, our problems were solved.
Instead of using the W32Time service (and SNTP), you could make your computer an NTP server, which updates the software clock periodically, measures clock drift, and _proactively_ steps the clock accordingly.
Meinberg offers an easy-to-install, open-source NTP server and monitor/statistic software for Windows (NT-based), which is a port of the de facto standard NTP/XNTP reference software.
It works well, usually within 0.1 second of an NTP stratum-2 (publicly available) server if your connection to the Internet is stable. You also get nice graphs that tell you how your computer is drifting.
It may not be atomic clock goodness, but obviously your tolerance for clock drift is pretty high. This is one step before buying your own clock hardware.
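For a quick look at what these tools are doing under the hood, here's a minimal SNTP sketch in Python (packet layout per the SNTP spec, RFC 4330; the "response" below is synthetic so the example stays self-contained and off the network):

```python
import struct

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def build_sntp_request():
    """48-byte SNTP client request: LI=0, version 3, mode 3 (client)."""
    packet = bytearray(48)
    packet[0] = (0 << 6) | (3 << 3) | 3   # 0x1B
    return bytes(packet)

def parse_transmit_time(response):
    """Extract the server's transmit timestamp (bytes 40-47) as Unix seconds."""
    seconds, fraction = struct.unpack("!II", response[40:48])
    return seconds - NTP_DELTA + fraction / 2**32

request = build_sntp_request()
print(len(request), hex(request[0]))  # 48 0x1b

# A fabricated reply with a known transmit timestamp, so the parsing can be
# demonstrated: 2007-01-01 00:00:00 UTC is Unix time 1167609600.
fake = bytearray(48)
fake[40:48] = struct.pack("!II", 1167609600 + NTP_DELTA, 0)
print(parse_transmit_time(bytes(fake)))  # 1167609600.0
```

In real use you'd send `request` to UDP port 123 of a time server and feed the 48-byte reply to `parse_transmit_time`; the difference between that result and your local clock is your offset.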
I don't know about how the manufacturers build 'em in PCs, but if my $10 Casio can keep good time, I really don't see why my $1500 PC can't.
"You can't sync your clock with a NTP source if the clock is already too far out of date. How ironic."
Actually, you can, it just takes an ungodly long time to go from 1980 to 2007. (Or does windows simply prevent this after all?)
I haven't had a really overloaded system in years now, but I remember back when I had the 486 that lots of clock skew would occur anytime a high priority (kernel?) process sat on the cpu; the bios or kernel was missing time signal interrupts.
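The "ungodly long time" is easy to quantify. If the OS refuses to step the clock and instead slews it at 500 ppm, the conventional maximum slew rate on many Unix adjtime implementations (an assumption here, not something Windows documents), then correcting a 1980-to-2007 offset takes:

```python
SLEW_PPM = 500                      # assumed max slew rate, adjtime-style
SECONDS_PER_YEAR = 365.25 * 86400

offset_years = 27                   # a clock stuck in 1980, corrected in 2007
offset_seconds = offset_years * SECONDS_PER_YEAR

# At 500 ppm the clock gains 500 microseconds per real second, so
# closing the gap takes offset / 0.0005 seconds of real time.
catch_up_seconds = offset_seconds / (SLEW_PPM * 1e-6)
print(round(catch_up_seconds / SECONDS_PER_YEAR))  # 54000 (years!)
```

Which is why every sane implementation steps the clock outright when the offset is large, rather than slewing. Ungodly indeed.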
I happen to have a Nixie tube sitting in my cube as decoration that I kept when I worked for MSOE, and one of my jobs was to tear apart old electronic equipment for disposal. I thought it was pretty cool, and have kept it ever since.
Point being... where did she find that clock?! I want one!
Where did your wife find that nixie clock? It is certainly one of those desktop items that will get lots of stares and comments. Very hip...in that 1899 sort of way.
I remember a discussion among old-time mainframe programmers about how, on some of the early 360 systems, there was a button that allowed the time clock change instruction to run. Someone would get the program queued up and then hold the button down while starting the program. They would typically be shifting the clock from its current time by a couple of microseconds. There was a lot of concern about file time stamps and log entries that were around the time change event, in the event that the clock was moved back.
From my signature file:
A man with a watch knows what time it is. A man with two watches is never sure.
We got our Nixie clock from Peter Jensen, at http://www.tubeclock.com/
As for deriving timing from the AC supply frequency, I think that's pure FUD.
OK, then how do you explain the massive clock skew of a PC connected to a UPS that's connected to a UPS? And why, when we removed one of the UPSes from the chain, the clock skew cleared up?
So buy three watches, and always have a very good idea.
How many of those 500+s fast/slow connections in the first graph are from machines with clocks deliberately set incorrectly? At that same website there's a graph that goes out to +/- 2000s - there are practically *no* machines in the 500s-2000s range - they're all greater than 2000s clock "skew". I suspect nearly all of these machines are *deliberately* set wrong.
How do you explain ... why, when we removed one of the UPSes from the chain, the clock skew cleared up?
Well, let's start with how the PSU and the RTC 'communicate'. There's really only one connector: http://www.playtool.com/pages/psuconnectors/connectors.html#atxmain24
and it provides only *DC* power to the mobo. No frequency component at all, apart from a little bit of ripple http://en.wikipedia.org/wiki/Ripple_%28physics%29 - that PSU designers do their damnedest to remove.
The only other signal to the mobo is a PWR_OK line which is just an "I'm OK" signal to the CPU.
So, the RTC can not be 'designed' to take any timing information from the AC supply.
I suppose it's remotely *possible* that two UPSes in series could degrade the AC supply somewhat, and thus make the PSU fail to deliver stable DC to the mobo, but if this was happening I'd expect SERIOUS grief - not just a clock 'speedup'. DC levels would be borderline or have spikes and/or dropouts in them. If this was the case, I'd be amazed that the system ran at all.
While I don't dispute what you observed, I can't see any rational explanation for it, and so don't agree that "the first thing to check is your source of AC power".
Woah, those clocks are not cheap.
I've been thinking in passing of getting one for a few years, but not at those prices.
Here's a good article on the history of timekeeping from the perspective of navigation. It's wonderfully written and I'd recommend it to anyone curious about modern timekeeping.
Today most people take for granted that we can determine our exact location on Earth to within a few feet, but this wasn't possible until recent decades because we didn't have clocks that were accurate enough to do the job. Not only are computers giant clocks, but so are GPS satellites, and if you've got a boner for organizing your life around UTC time within a millisecond, a nice GPS receiver can get you near-atomic-clock accuracy.
Also, no hardware uses AC power to keep time. In the US, the power company guarantees that your wall power will have a frequency of 60 Hz as a daily average, and that only applies to the power delivered to the building. If you've got a bunch of PCs running in your room, the quality of the sine wave coming out of the wall is probably pretty miserable.
I'm also not a hardware design expert, but I believe UPSes act as a low-pass filter for the incoming power signal and any data at 60 Hz should get through untouched. What you observed was likely caused by something much, much more elaborate and had something to do with the interactions of the PC power supply with the UPS system.
For the EE nerds out there here is a detailed explanation of the time accuracy from manufacturer of the chip many (most?) computers use for their RTC: http://www.maxim-ic.com/appnotes.cfm/appnote_number/58
A quick summary is that the accuracy is dependent on the tolerances of the crystal and a joined capacitor. Once installed the crystal is sensitive to temperature extremes and electronic noise. I expect Jeff's two UPSs were causing the AC to be very noisy at a high frequency and interfering with the RTC chip.
The funny thing is everyone is posting how bad it would be to use the 60Hz AC frequency to tell time, but it's been used for years in electric clocks. In fact, Jeff's new nixie clock uses the AC signal as its time signal. From the manual for the kit version (looks like fun): "The power line provides a very reliable clock source, courtesy of the power company".
How accurate is your Nixie clock Jeff?
The time may drift slightly on a PC, but we are talking a few seconds a year and in a constant direction--it would NEVER jump around like that web site seems to make us think it does. That is all variations in networking.
I'm not saying PC clocks are perfectly accurate, just that the drift isn't something you can measure at a web site--you'd have to test two times that were weeks or months apart to get even a second worth of drift.
why is it that everything you do almost inevitably is on Windoze?
only sometimes do the comments touch on alternative operating systems
Just wanted to add something that I don't think anyone else has covered here: the Time subsystem in Windows XP is separate from the RTC chip. It appears to me that it may well be a kernel Level 0 driver (i.e. software) that runs a PID algorithm, using the time from the NTP servers as a set point, and then using PID control (often found in manufacturing to ensure accurate analogue positioning of pistons, for example) to resync the clock in a predictable manner;
e.g. If your clock is running behind, Windows will interpret each second as perhaps 900ms so that more "seconds" will pass per real-time seconds to get caught up. Similarly, if your clock is too fast, Windows will perhaps make each "second" count as 1.1s, thus slowing your clock down.
Because of this algorithm, it can actually take several minutes to re-sync a clock in Windows XP; and because of the limitations of PID control, it would wreak havoc on timings if your clock is too far away. (The proportional term in the PID algorithm becomes too large and too inaccurate -- having a kernel clock ratio, like you mentioned, of 3:1 (three "seconds" to one real second) would no doubt mess up anything which requires proper timing, especially games.)
Windows appears to sync its own Time subsystem to the RTC chip every so often to ensure no data loss on reboots; the RTC chip runs the clock (as you say, pretty inaccurately) until Windows boots, and then the Time subsystem takes over from the RTC.
The algorithm, although complex, is far better than most OSes', probably because Microsoft requires an accurate clock to ensure security on domain logins.
Wikipedia has an excellent article on PID algorithms if you wish to research this more. :)
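The catch-up behavior described above can be sketched numerically. To be clear, this is purely illustrative, not Microsoft's actual algorithm; the 10% slew factor is invented for the example, and integer milliseconds are used to keep the arithmetic exact:

```python
def slew(software_ms, true_ms, credit_ms=1100, tick_ms=1000, max_ticks=10000):
    """Advance a lagging software clock until it catches up with real time.

    credit_ms=1100 means each real second is credited as 1.1 "seconds"
    while the clock is behind (it would be 900 if the clock were ahead).
    Returns how many real seconds the catch-up took.
    """
    ticks = 0
    while software_ms < true_ms and ticks < max_ticks:
        true_ms += tick_ms        # real time advances one second
        software_ms += credit_ms  # the software clock runs 10% fast
        ticks += 1
    return ticks

# A clock 2 seconds behind closes the gap at 100 ms per real second:
print(slew(software_ms=0, true_ms=2000))  # 20
```

The appeal of slewing over stepping is that the clock never jumps or runs backward, so timestamps stay strictly ordered; the cost, as noted above, is that large offsets take a long time to correct.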
What you observed was likely caused by something much, much more elaborate and had something to do with the interactions of the PC power supply with the UPS system.
Fair enough; as I said, I'm no hardware engineer, but I definitely observed the power input messing up the internal PC clock. The drift was severe, too, on the order of 5-10 minutes a day.
the Time subsystem in Windows XP is separate from the RTC chip
There's a good description of how Windows manages time here. I assume it's the same as most other operating systems:
So buy three watches, and ...
... get picked up on suspicion of selling stolen goods.
You learn something new from this blog every day. Though I don't trust that Windows time thingy. I usually switch that off and just manually change the time every so often. There are so many dumb automated features integrated into Windows that I expect them to cause problems, so I disable them all.
I've had a time fetish for years too. Thanks for the clock links all, I've ordered one. For those who share the interest in time, here are two books that I recommend:
_Splitting_the_Second:_A_Story_of_Atomic_Time_ by Tony Jones
_Time's_Pendulum_ by Jo Ellen Barnett
For a media centre PC, I've found the best source of time sync to be the time signal from a DVB tuner. You don't need an NTP server or even a network connection (you can get your schedule data from the DVB stream as well). And of course, it's the perfect benchmark of time for a system that has its entire world centred on TV. My HTPC exclusively uses the dvbdate utility (don't know if there is an equivalent for Windows MCE; I'm on Linux/MythTV).
This is also a problem if you're using WS-Security extensions, as messages will be rejected if there's too much of a clock difference between client and service. You can set the time tolerance, but only if you control both machines...
I don't see how AC frequency could affect a PC's clock, since the power is regulated to DC long before it hits any internal component (and the CMOS battery is a different circuit anyway). The reason you're not supposed to daisy chain a UPS is that most UPSes use a step-approximated sine wave when operating from battery, so the batteries on both of them will discharge at the same time and you'll be no better off than having just one UPS. Or worse, if you have two different kinds of UPS (i.e. one inline, one standby), one might interpret the stepwave as an overload condition and start discharging its battery immediately. I believe that could damage the equipment under some circumstances.
Either way, it's unrelated to the frequency. Probably the reason for the added clock skew is that since the UPS has to regulate the AC power, you end up with a slightly lowered voltage at the output, and daisy-chaining enough standby models could create a noticeable sag by the time it gets to the PC. At least, that's my untested cocktail-napkin theory. It sounds more plausible than frequency dependency!