This library doesn't deal with the notion of local time at all. Everything is UTC-based, possibly with a user-supplied time zone offset, but the hard part of calculating that offset must be done by the caller.
I do think the documentation could be a little clearer. The author talks about “time zones” but the library only deals with time zone offsets. (A time zone is something like America/New_York, while a time zone offset is the difference from UTC, which is -14400 seconds for New York today but will be -18000 in a few months when daylight saving time ends.)
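To make the distinction concrete, here's a small stdlib-only Python sketch (an illustration, nothing to do with the library under discussion; the offsets are the ones mentioned above):

```python
from datetime import datetime, timezone, timedelta

# A time zone *offset* is just a fixed shift from UTC:
edt = timezone(timedelta(seconds=-14400))  # New York in summer (EDT)
est = timezone(timedelta(seconds=-18000))  # New York in winter (EST)

# A time *zone* like America/New_York is a rule set that picks one of
# these offsets depending on the date, so the same zone maps the same
# UTC instant to different local times across the year:
noon_utc = datetime(2024, 7, 1, 12, 0, tzinfo=timezone.utc)
print(noon_utc.astimezone(edt).hour)  # 8  (08:00 local under EDT)
print(noon_utc.astimezone(est).hour)  # 7  (07:00 local under EST)
```

A library that only stores an offset has no way to know that the right offset for New York changes twice a year; only a zone database (like IANA tzdata) knows that.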
I find the three different time representations/sizes curious (e.g., what possible use case needs nanosecond precision over a span of billions of years?). More confusing is that the library offers that extreme granularity, yet durations get only about a ±292-year range at nanosecond precision?
> what possible use case would need nanosecond precision over a span of billions of years?
Once you've decided you're using nanosecond precision, a 64-bit representation can only cover 584 years, which ain't enough. You really want at least 2 more bits, so you can represent 2024 years.
But once you're adding on 2 bits, why not just add on 16 or even 32? Then your library can cover the needs of everyone from people calculating how long it takes light to travel 30cm, to people calculating the age of the universe.
That's how I imagine the design decisions went, anyway :)
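A quick sanity check of those figures, with plain arithmetic (independent of any particular library):

```python
NS_PER_YEAR = 365.25 * 24 * 3600 * 1_000_000_000  # ns in a Julian year

# A 64-bit nanosecond counter spans about 584 years in total,
# or roughly +/-292 years around the epoch if it's signed:
total_years = 2**64 / NS_PER_YEAR
signed_years = 2**63 / NS_PER_YEAR
print(int(total_years))   # 584
print(int(signed_years))  # 292

# Each extra bit doubles the span, so 32 extra bits buy a factor of
# ~4 billion -- comfortably past the age of the universe:
print(int(2**96 / NS_PER_YEAR))  # ~2.5 trillion years
```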
Of course you can't really provide sub-second accuracy without leap second support, and what does pre-human-civilisation leap second support even mean?
Related tangent: databases should track units. If I have a time column, I should be able to declare that the column represents, say, durations in float64 seconds. Then I should be able to write
SELECT * FROM my_table WHERE duration_s >= 2h
and have the database DWIM, converting "2h" to 7200.0 seconds and comparing like-for-like during the table scan.
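No mainstream engine does this, but the conversion itself is trivial. A toy sketch of the literal parsing (hypothetical helper, not any real database's syntax):

```python
import re

# Conversion factors to the column's declared base unit (seconds).
FACTORS = {"s": 1.0, "min": 60.0, "h": 3600.0, "d": 86400.0}

def parse_duration(literal):
    """Turn a literal like '2h' into float64 seconds."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(s|min|h|d)", literal)
    if m is None:
        raise ValueError(f"bad duration literal: {literal!r}")
    return float(m.group(1)) * FACTORS[m.group(2)]

print(parse_duration("2h"))     # 7200.0
print(parse_duration("90min"))  # 5400.0
```

The hard part is not the arithmetic but teaching the query planner that the converted constant can still use the column's index, which is exactly the like-for-like comparison described above.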
Years ago, I wrote a special-purpose SQL database that had this kind of native unit handling, but I've seen nothing before or since, and it seems like a gap in the UI ecosystem.
And it shouldn't be just for time. We should have the whole inventory of units: mass, volume, information, temperature, and so on. Why not? We could also teach the database to reject dimensional nonsense, e.g.
SELECT 2h + 15kg -- type error!
Doing so would go a long way towards catching analysis errors early.
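A minimal sketch of that kind of dimension checking (hypothetical, not modeled on any real database's type system):

```python
# Map each unit to (dimension, factor-to-base-unit).
UNITS = {
    "s": ("time", 1.0), "h": ("time", 3600.0),
    "g": ("mass", 1e-3), "kg": ("mass", 1.0),
    "B": ("information", 1.0), "KiB": ("information", 1024.0),
}

def add_quantities(a, unit_a, b, unit_b):
    """Add two quantities, rejecting mixed dimensions like 2h + 15kg."""
    dim_a, f_a = UNITS[unit_a]
    dim_b, f_b = UNITS[unit_b]
    if dim_a != dim_b:
        raise TypeError(f"cannot add {dim_a} and {dim_b}")
    return a * f_a + b * f_b  # result in the base unit

print(add_quantities(2, "h", 30, "s"))  # 7230.0 (seconds)
try:
    add_quantities(2, "h", 15, "kg")
except TypeError as e:
    print(e)  # cannot add time and mass
```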
I think it’s important to be explicit about whether or not signed integers are used. From reading the documentation it seems they may be signed, but it isn't stated. If they are signed, then you could have multiple bit strings that represent the same date and time, which is not great.
Definitely signed - “use negative duration to subtract”
But the bit pattern is an issue internal to the library. If you can find a bug in the code, certainly point it out, and offer a fix if it’s in your skillset.
As a smallish PostgreSQL contributor, I can only say: NO, DON'T DO THIS!!!!
An extensible type system is the worst thing that could happen to database end-user performance. You can no longer short-cut anything in query parsing and optimization: you must check the type of every operand, find the correct operator implementation, find the correct index operator family/class, and much more, all by querying the system catalog. Input and output of values also go through functions stored in the system catalog. You can't even answer "select 1" without consulting the system catalog.
There should be a sane set of built-in types plus a struct/JSON-like way of composing them. That is what most DBs do, except PostgreSQL, and I strongly believe it is the right way.
This is a sort of lazy Ask HN, but in your experience, what is more useful / valuable: nanosecond representation, or years outside the nanosecond range of something like 1678-2262?
I don't do "proper" science, so the value of nanoseconds seems limited to very clever experiments (or some financial trade tracking that is probably even more limited in scope).
But being able to represent historical dates seems more likely to come up?
Why not go golang style: a Unix timestamp in nanoseconds, as a signed int64? Maybe you can't cover millions of years with nanosecond precision, but do you really need to?
With that precision and size, you can only cover the years from 1678 to 2262, which strongly limits your ability to represent historical dates and times.
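That range can be checked with the standard library (plain datetime arithmetic, no assumptions about the library under discussion):

```python
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
# Signed int64 nanoseconds around the Unix epoch (truncated to
# microseconds, the finest resolution timedelta can hold):
half_span = timedelta(microseconds=2**63 // 1000)
print((epoch - half_span).year)  # 1677
print((epoch + half_span).year)  # 2262
```

This is the same range Go documents for its time package: an int64 nanosecond count is only lossless between roughly 1678 and 2262.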
alberth|1 year ago
https://stackoverflow.com/questions/6841333/why-is-subtracti...
And Computerphile explains it so well in their 10-minute video:
https://www.youtube.com/watch?v=-5wpm-gesOY
---
I long ago learned to never build my own date/time or encryption libraries. There are endless edge cases that can bite you hard.
(Which is also why I'm skeptical when I encounter new such libraries)
cyberax|1 year ago
Simply reducing the precision to 10ns will provide enough range in practice.
zokier|1 year ago
I wonder what does time_date(1311, 11, 18) return?

ralferoo|1 year ago
I can think of a few plausible reasons, but the only one that is really significant is "what epoch"? In the case of UNIX-based systems and systems that try to mimic that behaviour, that is well defined. But as you haven't said what your complaints are, it's hard to provide any counterpoint or justification for why things are as they are.
> time_date(1311, 11, 18)
That isn't defined in the epoch used by most computer systems, so all bets are off. Perhaps it'll return MAX_INT, MIN_INT, 0, something that's plausible but doesn't take into account calendar reforms that have no bearing on the epoch being used, or perhaps it translates into a different epoch and calculates the exact number of seconds, or anything else. One could even argue that there are no valid epochs before GMT/UTC, because before then it was all just local time.
But of course, you can argue either way whether -ve values should be supported. Exactly 24 hours before 1970-1-1 0:00:00 UTC could be reasonably expected to be -86400, on the other hand "since" strongly implies positive only.
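For what it's worth, Python's stdlib takes the signed view here (a stdlib illustration, not a claim about the library under discussion):

```python
from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
day_before = datetime(1969, 12, 31, tzinfo=timezone.utc)

# Instants before the epoch simply get negative timestamps:
print(day_before.timestamp())                # -86400.0
print((day_before - epoch).total_seconds())  # -86400.0
```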
Other people might have entirely different epochs for different reasons, again within the domain it's being used, that's fine as long as everyone agrees.
Or did you have some other objection?