The default behaviour of setTimeout seems problematic. It could be used for an exploit, because code like this might not work as expected:
  const attackerControlled = ...;
  if (attackerControlled < 60_000) {
    throw new Error("Must wait at least 1min!");
  }

  setTimeout(() => {
    console.log("Surely at least 1min has passed!");
  }, attackerControlled);
The attacker could set the value to a comically large number and the callback would execute immediately. This also seems to be true for NaN. The better solution (imo) would be to throw an error, but I assume we can't due to backwards compatibility.
A scenario where an attacker can control a timeout, where the callback running sooner than one minute would be a security failure but running days later is perfectly fine, so that no upper-bound check is required, seems… quite a contrived edge case.
The problem here is having an attacker control a security sensitive timer in the first place.
That's just terrible input validation and has nothing to do with setTimeout.
If your code would misbehave outside a certain range of values and your input might span a larger range, you should be checking your input against the range that's valid. Your sample code simply doesn't do that, and that's why there's a bug.
That the bug happens to involve a timer is irrelevant.
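For illustration, a range check that treats the delay as untrusted input might look like this (scheduleUnlock and MAX_TIMEOUT are made-up names for the sketch; 2^31 − 1 ms is the largest delay most runtimes handle without wrapping):

```javascript
// Hypothetical sketch: validate against the full range that is meaningful,
// not just the lower bound. 2**31 - 1 ms (~24.8 days) is the largest delay
// most runtimes handle without truncating or firing immediately.
const MAX_TIMEOUT = 2 ** 31 - 1;

function scheduleUnlock(callback, delayMs) {
  // Rejects NaN, Infinity, fractions, and out-of-range values in one check
  if (!Number.isInteger(delayMs) || delayMs < 60_000 || delayMs > MAX_TIMEOUT) {
    throw new RangeError(`delay must be an integer in [60000, ${MAX_TIMEOUT}]`);
  }
  return setTimeout(callback, delayMs);
}
```

With this shape, the NaN and comically-large-number cases from the example above fail loudly instead of firing the callback immediately.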
In nodejs you at least get a warning along with the problematic behavior:
Welcome to Node.js v22.7.0.
Type ".help" for more information.
> setTimeout(() => console.log('reached'), 3.456e9)
Timeout { <contents elided> }
> (node:64799) TimeoutOverflowWarning: 3456000000 does not fit into a 32-bit signed integer.
Timeout duration was set to 1.
(Use `node --trace-warnings ...` to show where the warning was created)
reached
I'm surprised to see that setTimeout returns an object. I assume at one point it was an integer identifying the timer, the same way it is on the web. (I think I remember it being so.)
One could imagine an app that doubles the wait between failed authentication attempts; an attacker could exploit this by doggedly retrying until the doubled delay overflows and the rate limiter breaks. Maybe not the most practical attack, but it is a way this behavior could bite you.
This type of thing is actually practical. Google Cloud Tasks have a max schedule date of 30 days in the future so the typical workaround is to chain tasks. As other commenters have suggested you can also set a cron check. This has more persistent implications on your database, but chaining tasks can fail in other ways, or explode if there are retries and a failed request does trigger a reschedule (I hate to say I’m speaking from experience)
True. Though if you have a need to trigger something after that much time, you might recognize the need to track that scheduled event more carefully and want a scheduler. Then you’ve just got a loop checking the clock and your scheduled tasks.
If we're pedantic, this doesn't actually do what's advertised: it waits X timeouts' worth of event cycles rather than just one true big timeout, assuming the precision matters when you're stalling a function for 40 days.
I haven’t looked at the code but it’s fairly likely the author considered this? eg the new timeout is set based on the delta of Date.now() instead of just subtracting the time from the previous timeout.
Each subtracted timeout is a 25 day timer, so any accumulated error would be minuscule. In your example there would be a total of 2 setTimeouts called, one 25 day timer and one 15 day. I think the room for error with this approach is smaller and much simpler than calculating the date delta and trying to take into account daylight savings, leap days, etc. (but I don't know what setTimeout does with those either).
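A minimal sketch of that delta-based chaining (not the library's actual code; setLongTimeout is an invented name): scheduling against an absolute target timestamp means per-hop error never accumulates, because each hop re-reads the clock.

```javascript
// Sketch of timeout chaining against an absolute target time.
// Each hop re-reads Date.now(), so scheduling error cannot accumulate.
const MAX_DELAY = 2 ** 31 - 1; // ~24.8 days, setTimeout's usable maximum

function setLongTimeout(callback, delayMs) {
  const target = Date.now() + delayMs;
  (function step() {
    const remaining = target - Date.now();
    if (remaining <= MAX_DELAY) {
      setTimeout(callback, Math.max(remaining, 0));
    } else {
      setTimeout(step, MAX_DELAY); // hop: re-check once the big timer expires
    }
  })();
}
```

For a 40-day delay this fires one ~24.8-day timer and then one ~15-day timer, matching the two-setTimeout count described above.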
In response to this, I read the spec of setTimeout, but I couldn't find the part where implementations may have an upper bound. Can someone more familiar with the specs point me in the right direction?
Adding a comment here to check back later because I'm curious now if someone has the answer. I thought it would be easy to find the answer, but I can't find it either. I figured it would say somewhere a number is converted to an int32, but instead I got to the part where there's a map of active timers[1] with the time stored as a double[2] without seeing a clear loss happening anywhere before that.
> In most JavaScript runtimes, this duration is represented as a 32-bit signed integer
I thought all numbers in JavaScript were basically some variation of double precision floating points, if so, why is setTimeout limited to a smaller 32bit signed integer?
If this is true, then if I pass something like "0.5", does it round the number when casting it to an integer? Or does it execute the callback after half a millisecond, as you would expect?
You're correct about JS numbers. It works like this presumably because the implementation is written in C++ or the like and uses an int32 for this, because "25 days ought to be enough for everyone".
When implementing a tiny timing library in JS a few years back, I found that most engines indeed seem to cast the value to an integer (effectively flooring it). To get consistent behaviour in all environments I resorted to always calling Math.ceil on the timeout value first [1], making it so that the callbacks always fire after at least the given timeout has passed (same as with regular setTimeout, which also cannot guarantee that the engine runs the callback at exactly the given timeout, due to scheduling). Also used a very similar timeout chaining technique as described here; it works well!
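The normalization described above can be sketched in a couple of lines (setTimeoutAtLeast is a hypothetical name, not the library's API):

```javascript
// Round fractional delays up so the callback never fires before the
// requested time has elapsed, regardless of how an engine truncates.
function setTimeoutAtLeast(callback, delayMs) {
  return setTimeout(callback, Math.ceil(delayMs));
}
```

So a delay of 0.5 becomes 1ms everywhere, instead of being floored to 0 in some engines and not others.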
JS numbers technically have 53 bits for integers (the float64 significand), but all binary operators turn them into 32-bit signed integers. Maybe this is related somehow to the setTimeout limitation. JavaScript also has the >>> unsigned shift operator, so you can squeeze that last bit out of it if you only care about positive values: ((2**32-1)>>>0).toString(2).length === 32
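You can see that 32-bit coercion directly, e.g. with the ~40-day value from the Node warning above:

```javascript
// Bitwise operators coerce their operands to 32-bit integers, which is
// exactly the range a delay silently gets squeezed into.
const big = 3_456_000_000;                 // ~40 days in milliseconds
console.log(big | 0);                      // -838967296: wraps as signed int32
console.log(big >>> 0);                    // 3456000000: still fits unsigned
console.log(((2 ** 32 - 1) >>> 0).toString(2).length); // 32
```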
Interestingly, this library seems to suffer from the opposite problem: where setTimeout can trigger earlier than expected, setBigTimeout can trigger never at all!
The problem is that when setBigTimeout is invoked with a floating-point number (and numbers are floating-point in JS by default), it keeps computing the time left till trigger in floating point. But FP numbers are weird:
> 1e16 - 1 == 1e16
true
At some point, they don't have enough precision to represent exact differences, so they start rounding, and this gets increasingly inaccurate as the value grows. For correct behavior, remainingDelay would need to be stored as a BigInt.
Of course, this problem is mostly theoretical, as it starts happening at around 2^83 milliseconds, which doesn't even fit in a 64-bit time_t, and it's not like humanity will exist by then. But still!
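The precision cliff is easy to demonstrate; BigInt arithmetic stays exact where doubles start rounding:

```javascript
// Above 2**53, doubles can no longer represent every integer, so a
// remaining-delay subtraction can round away to no change at all.
console.log(1e16 - 1 === 1e16);               // true: the 1 rounds away
console.log(10n ** 16n - 1n === 10n ** 16n);  // false: BigInt stays exact
```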
Awesome! Already have a project I can use this on, thanks.
As a side note, why do you use this weird non-Github, non-Gitlab, non-Bitbucket sketchy looking git host? I can see the code obviously, but it makes me worry about supply chain security.
It’s made by Drew Devault who is mostly well-respected in the hacker community, and it’s made exactly to be an alternative to BigCo-owned source hosts like GitHub, Gitlab and Bitbucket.
Instead of chaining together shorter timeouts, why not calculate the datetime of the delay and then invoke via window.requestAnimationFrame (by checking the current date, ofc)?
Are you suggesting checking the date every frame vs. scheduling a long task every once in a long while? Can't tell if it is ironic or not, I'm sorry (damn Poe's law). But assuming not: it would be a lot more computationally expensive to do that. Timeouts are very optimized, and they "give back" computer resources in the meantime.
Unlike setTimeout, requestAnimationFrame callbacks are automatically skipped if the browser viewport is minimized or no longer visible. You wouldn’t want to miss the frame that matters!
Off the top of my head: a cron scheduler for a server that reads from a database and sets a timeout upon boot. Every time the server is rebooted the timeouts are reinitialized (fail-safe in case of downtime). If upon boot there's a timeout > 25 days, it'll get executed immediately, which is not the behavior you want.
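One way to make that boot-time rehydration safe (a sketch with invented names; runAt is assumed to be the persisted fire time in epoch milliseconds): clamp each delay and re-check, rather than trusting a single huge setTimeout.

```javascript
// Hypothetical rehydration sketch: clamp each stored delay so a
// > 25-day job gets re-checked later instead of firing immediately.
const MAX_DELAY = 2 ** 31 - 1;

function rehydrate(job, runJob) {
  const remaining = job.runAt - Date.now();
  if (remaining <= 0) return runJob(job); // genuinely overdue: run now
  setTimeout(() => rehydrate(job, runJob), Math.min(remaining, MAX_DELAY));
}
```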
We recently had a failed unit test because setTimeout(fn, 1000) triggered at 999ms. That test had run more than a hundred times before just fine. Till one day it didn't.
setTimeout has no guarantees, and even if it did, your unit tests shouldn't depend on it.
Flaky unit tests are a scourge. The top causes of flaky unit tests in my experience:
- wall clock time ( and timezones )
- user time ( and timeouts )
- network calls
- local I/O
These are also, generally speaking, a cause of unnecessarily slow unit tests. If your unit test is waiting 1000ms, then it's taking 1000ms longer than it needs to.
If you want to test that your component waits, then mock setTimeout and verify it's called with 1000 as a parameter.
If you want to test how your component waiting interacts with other components, then schedule, without timers, the interactions of effects as a separate test.
Fast reliable unit tests are difficult, but a fast reliable unit test suite is like having a super-power. It's like driving along a windy mountainside road: the difference between one with a small gravel trap and one lined with armco barriers. Even though in both cases the safe driving speed may be the same, having the barriers there will give you the confidence to actually go at that speed.
Doing everything you can to improve the reliability and speed of your unit test suite will pay off in developer satisfaction. Every time a test suite fails because of a test that had nothing to do with the changes under test, a bit more of a resume gets drafted.
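A hand-rolled sketch of the mocking idea suggested above (test frameworks like Jest's fake timers or Sinon provide a polished version; createFakeTimers is invented here). The component takes its timer function as a dependency, so the test can verify the 1000ms parameter without ever waiting:

```javascript
// Minimal fake timer: records scheduled callbacks instead of waiting,
// and fires them only when the test explicitly advances time.
function createFakeTimers() {
  const scheduled = [];
  return {
    setTimeout(fn, delay) { scheduled.push({ fn, delay }); },
    calls: scheduled,
    advance(ms) {
      const due = scheduled.filter((t) => t.delay <= ms);
      for (const t of due) {
        scheduled.splice(scheduled.indexOf(t), 1);
        t.fn(); // run without any real waiting
      }
    },
  };
}
```

A test then asserts `timers.calls[0].delay === 1000` and calls `advance(1000)`, turning a flaky 1-second wait into an instant, deterministic check.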
I wonder if your 999ms was measured using wall-clock time or a monotonic time source? I imagine a wee time correction at an inopportune time could make this happen.
Go timers do have weird little details; in fact one little detail changed recently in 1.23 and broke my code. A third-party dependency selects on sending to a channel and time.After(0); before 1.23, due to timer scheduling delay, the first case would always win if the channel had capacity to receive, but since 1.23 the timer scheduling delay is gone and the "timeout" wins half the time. The change is documented at https://go.dev/wiki/Go123Timer but unless you read release notes very carefully (in fact I don't think the race issue is mentioned in the 1.23 release notes proper, only in a separate deep dive which is not linked from the release notes) and are intimately familiar with everything that goes into your codebase, you can be unexpectedly bitten by a change like this, as I was.
This is excellent. But I was hoping for a setTimeout that survived JavaScript environment restarts. Maybe setBigReliableTimeout is in your future? Hahaha! :)
This reminds me of when I am trying to find something in a cabinet, but don’t really look very hard, and my wife will say “did you even try?” and find it in ~1second.
Just out of curiosity, what was the use case for a really long timeout? Feels like most if not all long timeouts would be best served with some sort of "job" you could persist, rather than leaving it in the event queue.
This is the thing with JS and TS, the types and stuff: it's all good until you realise that all integers are basically int53 (represented as float64, with 52 explicit bits for the fraction).
Yes, it's nice and flexible - but also introduces some dangerous subtle bugs.
userbinator|1 year ago
Sometimes the wait is over before I find the responsible code, and sometimes it does check server-side, but that's just part of the fun...
ballenf|1 year ago
Or maybe I'm missing your point.
[1] https://html.spec.whatwg.org/multipage/timers-and-user-promp...
[2] https://w3c.github.io/hr-time/#dom-domhighrestimestamp
vilius|1 year ago
Just JS being JS: setTimeout(()=>{}, Infinity) executes immediately
[1]: https://github.com/DvdGiessen/virtual-clock/blob/master/src/...
jonathanlydall|1 year ago
Maybe the system clock did a network time synchronisation during the setTimeout window.
steve_adams_86|1 year ago
This is fun, though. JS is a bucket of weird little details like this.
maxbond|1 year ago
https://git.sr.ht/~evanhahn/setBigTimeout/tree/main/item/mod...