
Time is an illusion, Unix time doubly so

228 points| nickweb | 3 years ago |netmeister.org

99 comments


seanc|3 years ago

From the GNU coreutils documentation [1]:

Our units of temporal measurement, from seconds on up to months, are so complicated, asymmetrical and disjunctive so as to make coherent mental reckoning in time all but impossible. Indeed, had some tyrannical god contrived to enslave our minds to time, to make it all but impossible for us to escape subjection to sodden routines and unpleasant surprises, he could hardly have done better than handing down our present system. It is like a set of trapezoidal building blocks, with no vertical or horizontal surfaces, like a language in which the simplest thought demands ornate constructions, useless particles and lengthy circumlocutions. Unlike the more successful patterns of language and science, which enable us to face experience boldly or at least level-headedly, our system of temporal calculation silently and persistently encourages our terror of time.

… It is as though architects had to measure length in feet, width in meters and height in ells; as though basic instruction manuals demanded a knowledge of five different languages. It is no wonder then that we often look into our own immediate past or future, last Tuesday or a week from Sunday, with feelings of helpless confusion. …

—Robert Grudin, Time and the Art of Living.

[1] https://www.gnu.org/software/coreutils/manual/html_node/Date...

photochemsyn|3 years ago

For even more fun, take a look at this excellent post on GPS, and in particular, the time problem:

https://ciechanow.ski/gps/#time

> "When it comes to the flow of time on those satellites, there are two important aspects related to Einstein’s theories of relativity. Special relativity states that a fast moving object experiences time dilation – its clocks slow down relative to a stationary observer. The lower the altitude the faster the satellite’s velocity and the bigger the time slowdown due to this effect. On the flip side, general relativity states that clocks run faster in lower gravitational field, so the higher the altitude, the bigger the speedup is."

> "Those effects are not even and depending on altitude one or the other dominates. In the demonstration below you can witness how the altitude of a satellite affects the dilation of time relative to Earth..."

There's also a nice, if complex, explanation of why your GPS receiver needs four satellite emitters to calculate the time bias of its clock.

a_shovel|3 years ago

I'm still of the opinion that handling leap seconds by ignoring them is a dumb idea.

Unix time should be a steady heartbeat counting up the number of seconds since midnight, January 1 1970. Nice, clean, simple. How you might convert this number into a human-readable date and time is out of scope/implementation-defined/an exercise for the reader/whichever variation of "not my problem" you prefer.
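A minimal sketch of that "steady heartbeat" view in Python (the example timestamps are arbitrary): the timestamp is just an integer, and everything human-readable is the conversion layer's business.

```python
from datetime import datetime, timezone

# Timestamp 0 is, by definition, midnight UTC on January 1st, 1970.
epoch = datetime.fromtimestamp(0, tz=timezone.utc)
print(epoch.isoformat())  # 1970-01-01T00:00:00+00:00

# Turning a raw count of seconds into a calendar date is the
# "not my problem" part -- a conversion library handles it:
print(datetime.fromtimestamp(1_000_000_000, tz=timezone.utc).isoformat())
# 2001-09-09T01:46:40+00:00
```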

Tuna-Fish|3 years ago

> Unix time should be a steady heartbeat counting up the number of seconds since ...

This is nice and clean, so long as you have exactly one computer. The second there is more than one, and they are talking to each other, their clocks can go out of sync with each other. And they will, because they are physical systems that are imperfect and in general much less precise than you'd expect them to be.

This means that there has to be a way to correct for errors. The best method, which almost everyone who manages a lot of computers converges on, is to "smear out" any errors: never discretely change the time on any machine, but just shorten or lengthen seconds slightly to bring any outliers back to the correct values. And once you have this system, using it to deal with a leap second is the easiest, simplest and least error-prone method.
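A toy model of that smearing idea in Python (the 500 µs-per-second limit is an illustrative number, not what any real NTP daemon uses):

```python
def slew(offset_us: int, max_step_us: int = 500) -> int:
    """Remove a clock error of `offset_us` microseconds gradually.

    Each simulated second corrects at most `max_step_us`, so the clock
    is never stepped -- seconds are only stretched or shrunk slightly.
    Returns how many simulated seconds the correction takes.
    """
    ticks = 0
    while offset_us != 0:
        step = max(-max_step_us, min(max_step_us, offset_us))
        offset_us -= step
        ticks += 1
    return ticks

# Smearing out a full leap second (1,000,000 µs) at 500 µs per second:
print(slew(1_000_000))  # 2000 simulated seconds
```

The point of the model: time on the machine never jumps, it only flows slightly faster or slower until the error is gone.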

I do think that there are purposes where local "machine time", which is just a monotonic clock counting upwards from bootup, would make sense. Especially when subsecond accuracy is important. But it should always be clear that there is no way to reliably convert between that and wallclock or calendar time. There are *no* intervals of calendar/wallclock time that reliably convert to any interval of machine time. It is not guaranteed that any wallclock minute contains exactly 60 machine seconds.
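Most languages expose exactly this kind of "machine time"; in Python it's `time.monotonic()`. A quick sketch of the distinction:

```python
import time

# Wall-clock time: can be stepped or slewed by NTP, can jump backwards.
wall = time.time()

# Machine time: a monotonic counter. Its absolute value is meaningless;
# only the difference between two readings is.
start = time.monotonic()
time.sleep(0.01)
elapsed = time.monotonic() - start

# Elapsed machine time is always positive, whatever the wall clock does.
print(elapsed > 0)  # True
```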

haberman|3 years ago

What is the actual benefit of this?

The cost is that conversion to/from civil time is far more complicated, and worse, cannot be computed for future dates for which leap seconds have not yet been determined.

I think that 86,400-second days with a 24-hour leap smear hits a sweet spot of utility and usability: https://developers.google.com/time/smear

There are very few applications that will know or care that seconds get 0.001% longer for 24 hours.
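The arithmetic behind that figure: one leap second spread evenly over 86,400 seconds stretches each second by about 11.6 ppm, i.e. roughly the 0.001% mentioned.

```python
# One extra second smeared evenly across a 24-hour day (86,400 seconds):
stretch = 1 / 86400
print(f"{stretch:.2e}")         # 1.16e-05, i.e. about 11.6 ppm
print(f"{stretch * 100:.4f}%")  # each second is ~0.0012% longer
```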

senko|3 years ago

ICYMI: The title is a reference to The Hitchhiker's Guide to the Galaxy: https://en.m.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_... ("time is an illusion, lunch time doubly so")

Rygian|3 years ago

Also the opening paragraph, paraphrasing "In the beginning the Universe was created. This has made a lot of people very angry and been widely regarded as a bad move."

lproven|3 years ago

There are multiple HHGTTG references in it, as in the author's Twitter bio:

> Vell, I'm just zis guy, you know?

(A quotation from Gag Halfrunt, Zaphod Beeblebrox's "personal brain care specialist".)

Dave3of5|3 years ago

Worked for a company recently that does test equipment for timing and sync.

I was surprised to see that very few systems other than computers (servers and suchlike) use Unix timestamps. So GNSS systems have their own representations of time, and the protocol that most network equipment uses is called PTP. Nowadays most are trying to use the White Rabbit protocol.

I was at a conference on this and there are a lot of crazy applications of these timing and sync technologies. The main one is in 5G networks, but the high-frequency trading companies do all of their trading in FPGAs now to reduce the time to make a decision, so accurate timing is essential. Even the power companies need high-accuracy time to detect surges in power lines. I spoke to someone from Switzerland who said they had serious security concerns, as the timing fibres are poorly secured and open to easy disruption.

It was a very interesting domain to work in, even though I only did the app part of the thing. Didn't pay enough though, and I was promised a promotion that never came.

throw0101a|3 years ago

> So GNSS systems have their own representation of time and the protocols that most network equipment uses is called PTP.

It's not so much that (Ethernet?) network equipment uses PTP, but rather that to get the accuracies desired (±nanoseconds) there needs to be hardware involved, and that makes baking it into chips necessary. It's an IEEE standard, so it gets rolled into Ethernet.

Applications for PTP are things like electrical grid and cell network timings. Most day-to-day PC and server applications don't need that much accuracy.

Most office and DC servers generally configure NTP, which gives millisecond (10^-3) or tens-to-hundreds-of-microseconds (10^-6) accuracy. Logging onto most switches and routers you'll probably see NTP configured.

To get the most out of PTP (10^-9) you generally need to run a specialized-hardware master clock.

mannykannot|3 years ago

"On Unix systems we measure time as the number of seconds since "the epoch": 00:00:00 UTC on January 1st, 1970.... this definition is not based on something sensical such as, say, the objective frequency of vibration of a Cesium-133 atom, but on a convenient fraction of the time it takes a particular large rock to complete a full rotation around its own axis."

Well, seconds have not been defined as "a convenient fraction of the time it takes a particular large rock to complete a full rotation around its own axis" for quite some time, and the origin is set to an abstract event in the past, which is not (as far as I know) subject to retroactive revision as a consequence of the vagaries of planetary or celestial motion (if it is, I would be fascinated to know more).

krisoft|3 years ago

> seconds have not been defined as "a convenient fraction of the time it takes a particular large rock to complete a full rotation around its own axis" for quite some time

That is true.

> origin is set to an abstract event in the past

That is also true.

> which is not (as far as I know) subject to retroactive revision as a consequence of the vagaries of planetary or celestial motion

I'm afraid you are wrong on that. Unix time is synced with UTC. UTC has so-called "leap seconds" scheduled at irregular intervals by the International Earth Rotation and Reference Systems Service to keep it in sync with the Earth's actual movements. So in effect the Unix timestamp is wrangled to stay in sync with the Earth's motion.

> if it is, I would be fascinated to know more

https://en.wikipedia.org/wiki/Unix_time#UTC_basis

https://en.wikipedia.org/wiki/Leap_second

thaumasiotes|3 years ago

> Well, seconds have not been defined as "a convenient fraction of the time it takes a particular large rock to complete a full rotation around its own axis" for quite some time

Seconds have not ever been defined that way, because the time it takes for the earth to complete a full rotation around its own axis (the "sidereal day") was never a question of much interest. It's mostly relevant to astronomers.

Seconds were always defined in terms of the synodic day, the time it takes for a point on the earth that is aimed directly at the sun to rotate around the earth until it is once again aimed directly at the sun.

They still are defined that way, in the sense that the only purpose of other definitions is to match the traditional definition as exactly as possible. If cesium started vibrating faster or slower, we'd change the "official definition" of a second until it matched the traditional definition. Given that fact, which definition is real and which isn't?

lazide|3 years ago

That's a very recent change - and it's not like 9192631770 transitions per second (which is hilariously self-referential!) is some obvious, natural value that ISN'T based on the historic 'typical' length of the day as the Earth rotates relative to the sun.

A second being 1/86400th of a day (24 hours * 60 minutes * 60 seconds per minute) is still essentially true, and still based, essentially, on the seasons and our movement around the sun (or the relative movements between the various bodies in the solar system, depending).

It being a chaotic natural system, we of course need to add fudge factors here and there (like leap seconds) to simplify the day-to-day math while keeping it aligned with observed reality, at least where it intersects with reality in a material way.

thomashabets2|3 years ago

Very interesting detail about Linux and setting the time back to a value such that the boot time is before the epoch, with monotonic clocks.

Shamelessly, I'll remind people here to never use gettimeofday() to measure time. E.g. setting the time backwards used to cause any currently running "ping" command to hang. (They fixed it about 10 years after I sent them a patch.)

More fun examples of bugs like that at https://blog.habets.se/2010/09/gettimeofday-should-never-be-...

sumtechguy|3 years ago

I have made that same mistake a few times myself.

It comes from a natural inclination: I want something to expire some period from now.

The natural way is to ask what time it is now, figure out what time you want to expire with a date add, then busy-wait until that time using some form of gettime. The very big assumption you make is that the gettime methods always move forward. They don't.

This bug is easy to make when you think you are treating a wait item as a calendar event. It's not. You need to find something to busy-wait on that always counts up. Do not use the system clock. Also pick something that counts at a known fixed rate; not all counters are equal. Some can skew by large margins after an hour and trigger when you do not expect, which makes people want to reach for the system clock again. If you somehow decide "I will use the clock", be prepared for some pain from the dozens of different ways that fails.
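A sketch of that "always counts up" wait in Python, using the monotonic clock instead of wall-clock time (the poll interval is an arbitrary illustrative choice):

```python
import time

def wait_until(deadline_monotonic: float, poll: float = 0.01) -> None:
    """Block until a deadline expressed on the monotonic clock.

    Immune to the system clock being set backwards or forwards,
    because time.monotonic() only ever counts up.
    """
    while True:
        remaining = deadline_monotonic - time.monotonic()
        if remaining <= 0:
            return
        time.sleep(min(remaining, poll))

# Expire 0.05 s from now, measured against the monotonic clock:
deadline = time.monotonic() + 0.05
wait_until(deadline)
print(time.monotonic() >= deadline)  # True
```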

OliverJones|3 years ago

A long time ago -- late 1980s -- I worked in system software, VMS-based, at DEC.

Sometimes we used, for dev and testing, dedicated machines running with clocks set 20-25 years in the future. (Those were a measurable investment of capital, power, and cooling back then.) This was smart: the remnants of DEC were able to sidestep the whole Y2K cluster**k.

Are our key tech vendors doing the same now? It's about a quarter century until the 2038 fin-de-siecle. I sure would like some assurance that the OSs, DBMSs, file systems, and other stuff we all adopt today will have some chance of surviving past 2038-01-19.

I know redefining time_t with 64 bits "solves" the problem. But only if it gets done everywhere. Anybody who's been writing programs for more than about 604 800 seconds has either thought through this problem or hasn't.

hwskdjf|3 years ago

Great blog post. Sometimes it's useful while testing to set random future and past times on Unix systems to see how programs handle that.

https://blog.darkinfo.org/timestomp-for-linux/

pfarrell|3 years ago

MongoDB, for one, will freak out if you set the system date into the future, interact with it, then set the time back. It will think the indices are corrupt and refuse to start. At least that happened to me last year. IIRC, the timestamp is part of generated object IDs, so it's sort of understandable. In the end I returned my computer to the future date, exported the data, and rebuilt my collections.

state_less|3 years ago

The article makes a passing reference to atomic clocks, which are fascinating. The folks over at MIT are working on improved clocks that can get below 100ms of error over the current lifespan of the universe.

https://news.mit.edu/2022/quantum-time-reversal-physics-0714

somat|3 years ago

CuriousMarc has an excellent episode on an atomic clock.

https://www.youtube.com/watch?v=eOti3kKWX-c

I like to pretend I can fix computers, this guy can actually fix computers, which is probably why I find his show so fascinating.

spirographer|3 years ago

Loved the article! There are so many great details, such as the standardization of UTC happening after UNIX time was invented, UNIX itself being born before the epoch, and all the great insight into the morass of 64-bit time across modern OSes.

Putting on my pedantic hat though, I see that East and West were switched in the discussion of Japan's unique 50/60Hz AC frequency split, and I can't get my mind off it. Hope you can make the edit.

emj|3 years ago

There have been patches over the last ten years to lessen the impact of 2038; considering how close that is, this is a bit worrisome. Nice to see a comparison of different systems like this, but you probably need to track this over time in some way. E.g. I'm pretty sure GNU date has seen patches about this in the last few years.

mlichvar|3 years ago

The article missed an opportunity to describe how spectacularly things can break when the 32-bit time_t overflows in Y2038.

If you still have such a machine (preferably without any valuable data), try setting the date to right before the overflow with this command

date -s @$[2**31 - 10]

and see if you can recover the system without reboot.

I have seen daemons stop responding and just consume CPU, NTP clients flood public servers, and other interesting things. I suspect many of these systems will still be running in Y2038, and that day the Internet might break.
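For reference, the instant that command lands just short of can be computed directly:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t holds at most 2**31 - 1 seconds past the epoch:
last_valid = datetime.fromtimestamp(2**31 - 1, tz=timezone.utc)
print(last_valid.isoformat())  # 2038-01-19T03:14:07+00:00
```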

_kst_|3 years ago

> date -s @$[2**31 - 10]

A digression: The documented bash syntax for arithmetic expansion is $(( EXPRESSION )) . I see that $[ EXPRESSION ] also works, but I don't see it documented anywhere. (Both syntaxes also work in zsh.)

wongarsu|3 years ago

Factories are full of machines that get replaced every couple of decades, and that run software setups even older. Roughly a decade ago I was involved in the development of an embedded system for industrial use, and the approach to Y2038 was "doesn't matter, I'll be retired by then". The system is still sold.

I wouldn't be surprised if a lot of companies will handle it by just setting the clocks back 50 years or so on industrial equipment. But God have mercy on those that forget some systems.

kortex|3 years ago

Bottom line is: time is quite complicated, and things get messy when you try to overload different usages or engineering constraints. Including but not limited to:

- small, limited, and/or fixed data size

- compatibility across systems

- range of expressible times

- monotonicity (either locally or distributed)

- express absolute datetimes accurately

- express relative offsets accurately

- accurate over very short or very long timescales (or both)

- synchrony to solar noon

- synchrony to sidereal time

- timezones

- relativistic effects

Pick any, uhhh, well pick as many as you can and try to shoehorn them together and you get your typical time implementation.

1letterunixname|3 years ago

Real-time: TAI64 TAI or go home.

Earth time: UT (UT1).

UTC is a vague approximation of UT1 using TAI and a leap seconds data source similar to tzdata.

On most modern CPUs, there is an accurate monotonic counter. The problem though is it doesn't know anything about when it was started or when it went to sleep in any time system.

Oh and bugs. Some systems interpret edge cases in one way, and others in others. Academic purity, interoperability: pick 1.

And then calendars are discontinuous through the years: October 15, 1582, and twice in 1752. It also varies by country which calendar system was in use when. The Julian and Gregorian calendars were in dual use until 1923.

Daylight savings time rules change. Timezones change. And then you get tzdata.

https://www.stjarnhimlen.se/comp/time.html

https://cr.yp.to/proto/utctai.html

gjulianm|3 years ago

Maybe I am misunderstanding the post, but for me the beauty of Unix time is precisely that all the weirdness with dates is abstracted away to the "conversion code" so that you only deal with "seconds". Timezones, leap seconds... all of that only matters when you're showing the user a date. For recording and calculations, it doesn't.

shagie|3 years ago

> Take the Traders’ method of timekeeping. The frame corrections were incredibly complex—and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth’s moon. But if you looked at it still more closely… the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind’s first computer operating systems.

Excerpt From A Deepness in the Sky by Vernor Vinge

CodesInChaos|3 years ago

> leap seconds

Unix time is UTC-based, so it ignores leap seconds and deviates from how many seconds have actually passed since the start of the epoch.

The actual number of seconds passed corresponds to TAI, but you can't convert future timestamps from TAI to UTC since you can't predict leap seconds, so you can't display future TAI timestamps using typical date notation.

> For recording and calculations, it doesn't.

Depends on what you're recording and calculating. Storing a UTC timestamp generally works when recording an event that already happened.

But it doesn't work for scheduling events like meetings, since there the authoritative time is often local time. If you simply store such future events in UTC you'll run into problems when the definition of the relevant timezone changes.
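A sketch of that distinction in Python (the meeting date and zone are made-up examples): store the local wall time plus the zone name, and derive UTC only when needed.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Authoritative data for a future meeting: local wall time + named zone,
# not a precomputed UTC instant.
meeting = datetime(2030, 6, 1, 9, 0, tzinfo=ZoneInfo("Europe/Berlin"))

# Derive UTC at the moment it's needed, from the current tzdata rules:
print(meeting.astimezone(ZoneInfo("UTC")).isoformat())

# If Berlin's offset rules change before 2030, re-deriving from the stored
# local time still yields a 09:00 local meeting; a stored UTC instant
# would silently shift the local meeting time instead.
```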

friendzis|3 years ago

> Timezones, leap seconds... all of that only matters when you're showing the user a date. For recording and calculations, it doesn't.

You cannot, by definition, tell how many Unix seconds later "the third of May, 2025, at noon, local time" is - there is no way to convert future local times to Unix times, because that "conversion code" is not fixed. Sure, we can reasonably expect that the relationship between time flow and local time will not change for past dates, but one must expect such changes for times in the future.

zokier|3 years ago

UNIX timestamps are horrible. You cannot do calculations with them; with a fractional part they do not sort correctly; and you cannot reliably convert them back and forth to ISO timestamps. Basically, to do anything useful you need an additional bit carried along with the timestamp to tell whether it's a leap second or not, which is extremely awkward, and most APIs do not support that.

NKosmatos|3 years ago

Excellent write up, with very nice humorous style and clearly explaining the situation with the year 2038 problem.

nrvn|3 years ago

From Beat The Devil (1953):

Time. Time. What is time? Swiss manufacture it. French hoard it. Italians squander it. Americans say it is money. Hindus say it does not exist. Do you know what I say? I say time is a crook.

ElfinTrousers|3 years ago

There is one kind of time that is real and important. That is naptime.

8bitsrule|3 years ago

Haven't astronomers already found a way to solve this problem, by using the standard epoch?

"there has also been an astronomical tradition of retaining observations in just the form in which they were made, so that others can later correct the reductions to standard if that proves desirable, as has sometimes occurred." [https://en.wikipedia.org/wiki/Epoch_(astronomy)]

jzl|3 years ago

I was surprised to learn that Linux has a “2262 problem” because of 64-bit time being used to store nanoseconds rather than seconds. That seems like a huge problem without an easy solution either. Yes there are almost 250 years to fix it but it seems like surprisingly bad planning. In any case it’s an interesting thought exercise to imagine what the fix should be.
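The back-of-the-envelope math behind the "2262 problem": 2**63 signed nanoseconds is roughly 292 years of range past the epoch.

```python
from datetime import datetime, timezone

# Signed 64-bit nanoseconds since the epoch run out at 2**63 ns:
max_seconds = 2**63 // 10**9              # ~9.22 billion seconds
print(max_seconds / (86400 * 365.2425))   # ~292.3 years of range
print(datetime.fromtimestamp(max_seconds, tz=timezone.utc).year)  # 2262
```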

thesuitonym|3 years ago

Do you think we'll still be using the same computer systems then?

tonmoy|3 years ago

Why is it bad planning? I can’t think of a better alternate plan.

Koshkin|3 years ago

What a mess. (Plus, time is relative, i.e. it depends on the frame of reference.)

1970-01-01|3 years ago

     UNIX time, like all times, is a very good one, if we but know what to do with it. -Ralph Waldo Emerson

dis-sys|3 years ago

Very interesting stuff. Wondering if there is any in-depth walkthrough of the time-keeping mechanisms in Linux?

fnordpiglet|3 years ago

This is why I eat Unix time for lunch.

sirmike_|3 years ago

Time is a flat circle.

gpderetta|3 years ago

Please. Time is obviously a cube.