crazygringo|13 years ago:
Does anyone know what the original reasoning was for Unix timestamps to not account for leap seconds? So that one timestamp can actually point to two physical times a second apart?
I mean, I know leap seconds aren't scheduled, and it's convenient to find a day by dividing by 86400, but it really seems like "physical seconds since the epoch" is the "fundamental" amount of time, as opposed to physical days, and the function that calculates datetimes (including time zones and DST) could just handle the leap seconds too.
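To make the divide-by-86400 convenience concrete, here's a minimal Python sketch (the timestamp is just an example value):

```python
import datetime

# POSIX time pretends every day is exactly 86400 seconds long,
# so recovering the calendar day is a single integer division.
ts = 1341100800  # 2012-07-01 00:00:00 UTC, the day right after a leap second
days_since_epoch, seconds_into_day = divmod(ts, 86400)

date = datetime.date(1970, 1, 1) + datetime.timedelta(days=days_since_epoch)
print(date, seconds_into_day)  # 2012-07-01 0
```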
It's obviously not changing now; I was just wondering about the historical context of it. It seems like Unix time and leap seconds both come from the beginning of the 1970s... was Unix time defined before the concept of leap seconds was?
jwm|13 years ago:
> Because the Earth's rotation speed varies in response to climatic and geological events, UTC leap seconds are irregularly spaced and unpredictable. Insertion of each UTC leap second is usually decided about six months in advance by the International Earth Rotation and Reference Systems Service (IERS)
I don't see how applications could make unix second <-> day conversions without downloading a map from the IERS, if leap seconds were included.
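To illustrate: if timestamps counted physical seconds, every application would need a leap-second table like the one below, re-downloaded whenever the IERS schedules a new entry. This is a hypothetical sketch (the table name and function are mine, not any real API), though the first two entries are the real 1972 leap seconds:

```python
LEAP_TABLE = [
    # (unix_time_when_leap_takes_effect, cumulative_leap_seconds)
    (78796800, 1),   # after 1972-06-30 23:59:60 UTC
    (94694400, 2),   # after 1972-12-31 23:59:60 UTC
    # ... must be extended forever, roughly six months at a time ...
]

def days_since_epoch(true_seconds):
    """Convert physical seconds since the epoch to calendar days."""
    leaps = 0
    for effective_at, cumulative in LEAP_TABLE:
        if true_seconds - cumulative >= effective_at:
            leaps = cumulative
    # Remove the leap seconds; then the usual division works again.
    return (true_seconds - leaps) // 86400
```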
I was disappointed to read that their solution didn't involve switching to TAI. The kernel should use seconds since epoch and leap seconds should be a user space issue, just like timezones.
How is handling a leap second any different from dealing with Daylight Savings Time, when a whole hour can skip or repeat itself? Wouldn't you just use the same logic?
Or is it the fact that servers tend to ignore DST, being set to GMT and using timezone+DST only for datetime rendering/parsing, like Unix timestamps? While leap seconds actually affect the clock itself?
Yep, it's the latter. Leap seconds actually affect time_t values, whereas daylight savings does not.
haberman|13 years ago:
I think it's simpler to think of time_t (or "unix time") as independent of any time zone. It's the number of seconds since an arbitrary "epoch" that happened simultaneously everywhere in the world. It so happens that the epoch happened at midnight GMT.
Of course it's not literally the number of seconds since the epoch because of leap seconds.
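That definition can be written down directly. A Python sketch of the POSIX formula (the dates are just example values):

```python
from datetime import datetime, timezone

def posix_time(dt):
    # POSIX time_t: whole days since the epoch times 86400, plus
    # seconds into the current day. Leap seconds simply aren't counted.
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    delta = dt - epoch
    return delta.days * 86400 + delta.seconds

ts = posix_time(datetime(2012, 7, 1, tzinfo=timezone.utc))
print(ts)  # 1341100800, even though 25 physical leap seconds
           # (as of 2012-06-30) never got their own number
```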
It seems to me that the problem is in all kinds of code that relies on time to do some critical operation when it really should not. Time is for people.
Computers should use a separate "time" that only moves forward. A numbered pulse.
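Most operating systems do expose exactly that, separate from the wall clock. In Python it's `time.monotonic()` from the standard library:

```python
import time

# time.monotonic() is a forward-only counter (CLOCK_MONOTONIC on
# Linux): unaffected by the user changing the date, NTP steps, or
# leap-second adjustments to the wall clock. Good for measuring
# durations, useless for naming a calendar date -- which is the point.
start = time.monotonic()
time.sleep(0.01)
elapsed = time.monotonic() - start
assert elapsed > 0  # guaranteed never to go backwards
```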
Is that part of Google's "TrueTime" project? I heard about it at a Google Spanner presentation. They use GPS receivers in their datacenters to get an exact time.
Smart. Spoiler/summary: they use a "leap smear" to keep code logic from breaking.
Instead of making code encounter the same second twice or skip a certain second, they smear the extra second over several hours beforehand through the central time server; by the time the leap second comes you're already sufficiently ahead/behind. (My comment: this works because the granularity of the time was never guaranteed that precisely anyway, so obviously no code can rely on it. Therefore, if code is correct without the smear it will be correct with the smear.)
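A linear version of the idea can be sketched like this (the 20-hour window and the smear-entirely-before-the-leap shape are illustrative assumptions, not Google's exact parameters):

```python
SMEAR_WINDOW = 20 * 3600.0  # spread the extra second over 20 hours

def smeared_time(true_time, leap_epoch):
    """Time reported by the NTP server while absorbing one inserted
    leap second that takes effect at `leap_epoch` (both in seconds)."""
    start = leap_epoch - SMEAR_WINDOW
    if true_time <= start:
        return true_time
    if true_time >= leap_epoch:
        return true_time - 1.0  # the whole extra second has been absorbed
    # Inside the window, run the clock a hair slow so the second is
    # spread evenly: each real second advances by (1 - 1/WINDOW).
    return start + (true_time - start) * (1.0 - 1.0 / SMEAR_WINDOW)
```

The reported clock stays continuous and strictly increasing; it just never has to show a repeated second.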
wmf|13 years ago:
Yes.
DanielRibeiro|13 years ago:
A much longer discussion on a different link, from 16 days ago: http://news.ycombinator.com/item?id=4112002
dfc|13 years ago:
"UTC with Smoothed Leap Seconds (UTC-SLS)": http://www.cl.cam.ac.uk/~mgk25/time/utc-sls/