item 21521694

Just how fast is too fast when it comes to web requests?

82 points | weinzierl | 6 years ago | rachelbythebay.com

82 comments

[+] FooBarWidget|6 years ago|reply
There is such a thing as too fast. Flight ticket comparison websites like skyscanner.com insert fake progress delays ("scanning airline website") to make it seem like they're spending a lot of effort to do important work for you. Research has shown that, without that delay, users trust such websites less, or value them less, because the instantaneous response time is equated to less valuable work.
[+] mepiethree|6 years ago|reply
Wow, I am very different than those users. I just think "how fucking hard can it be to show me the same flight prices you showed me 30 seconds ago," and then switch to Google Flights
[+] ricardobeat|6 years ago|reply
While that's true of many products, I don't think it applies to SkyScanner - searching for air fares, especially when aggregating multiple sources, is actually really slow. They might be setting a minimum loading time so the search doesn't jump around so much, but it would never be even close to the timescales involved in the post. AFAIK Google is the only one that can do it faster due to having their own 'index'.
[+] Kaveren|6 years ago|reply
I hear this every time TurboTax gets brought up. I think it's a lot healthier to foster good relationships between users and tech so that they don't have to distrust instantaneous actions.

As an aside, I'm also suspicious of research that shows ~100ms as the threshold for "instantaneous" action, because in video games like FPS shooters most players seem to dislike playing on ping that high.

[+] mcv|6 years ago|reply
I remember back in university in the 1990s, someone in sysadmin added some sleep() to the startup script for Netscape with a comment like "pretend we're doing something".

I prefer things to be instantaneous. Slowness suggests it's been cobbled together by incompetents.

[+] ksec|6 years ago|reply
But there are lots of other factors involved.

We could get a sub-300ms response, fully rendered, but if those results were not good, it would be seen as fast but inaccurate. No one complained about Google's results in its early days; it was fast, and comparatively accurate.

There is also the problem of varying response times. Ideally, all queries and searches should return in roughly the same amount of time, because users do not understand that queries differ in complexity; to them, all queries are the same.

Without some sort of animation, users will be surprised to see their work served and done without really knowing or noticing, which could be both good and bad in different situations. (See the Safari rendering progress bar.)

[+] jdc|6 years ago|reply
You wouldn't happen to have a source handy for said research, would you? Genuinely curious.
[+] jka|6 years ago|reply
I just took a quick look at Skyscanner and couldn't initially see this happening - do you have a link or example?

It's been a long time since I worked there but there was generally strong opposition to masking any delays.

That said it's frequently necessary to retrieve the latest fare quotes from the airlines (which involves network calls naturally), so that the user doesn't end up clicking through to wildly different pricing/availability -- but departure/arrival/carrier information about flights should generally be displayed without delay (unless things have changed - glad to see an example if so!).

[+] silveroriole|6 years ago|reply
Like those cookie banners where you CAN opt out, but you get a spinner for about 30 seconds while the website pretends it’s soooooo difficult to save your preferences...
[+] rajacombinator|6 years ago|reply
Ha. When building a high-ticket-price "AI" product recently, it definitely occurred to me that some level of slowness in compute time could be a "feature, not a bug" from the user's perspective (e.g. the AI thinking really hard!). We didn't have to implement that option in the end (the normal solution was sufficiently slow), but there's definite merit in not making your product appear trivial.
[+] TeMPOraL|6 years ago|reply
That does say a lot about the mindset of such sites. Instead of showing off their superior quality and reassuring users that they really are that much better than the competition, they would rather lie to the user and offer a worse experience.
[+] heavenlyblue|6 years ago|reply
That does say a lot about the users, but not about the request speed.
[+] Deimorz|6 years ago|reply
Hacker News itself is a good example of a related topic: voting here feels fast because the vote button disappears as soon as you click it, but it's actually very slow.

If you open your browser's Network panel in dev tools and vote on something, you'll see that it sends a request, gets back a 302 redirect, and then does another request to load a whole new copy of the page you were voting from in the background (and then just discards it). At least from my location, it consistently takes about 1.2 seconds for each vote to finish, even though it feels instant while using the site.

One consequence of this is that if you vote on multiple comments quickly, some of your votes are probably being lost with no indication. If you try to vote on something else before the first vote has fully finished, the second one gets a 503 error, but there's no indication of this at all.

It happens to me often - I read a good reply comment, vote it up, and then immediately vote up its parent (which I've already read) as well, since it resulted in that good comment. If I come back to the page later I'll notice that my vote on the parent didn't go through, and if you open the Network panel and try this, you'll see it - the second vote 503s if your second click was before the first one finished, but the site acts the same whether it failed or not.
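The lost-vote race described above can be avoided client-side by serializing requests, so a second click always waits for the first to finish. A minimal sketch, with `sendVote` as a hypothetical stand-in for whatever request the site actually makes:

```javascript
// Serialize vote requests: each new vote waits for the previous one,
// so rapid clicks become sequential requests instead of a racing 503.
function makeVoteQueue(sendVote) {
  let tail = Promise.resolve();
  return function enqueue(itemId) {
    // Chain onto the previous request; swallow its error so one failure
    // doesn't wedge the whole queue.
    const result = tail.catch(() => {}).then(() => sendVote(itemId));
    tail = result;
    return result;
  };
}
```

With this, voting on a reply and then immediately on its parent issues two requests back to back rather than two concurrent ones.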

[+] TeMPOraL|6 years ago|reply
That's a good observation. I've noticed it happening too, since I have a similar upvoting pattern: I tend to read a whole subtree, and then go back up, rapidly upvoting comments in it I considered insightful. I've noticed that the votes sometimes don't register, which is why I periodically reload the comment thread and reupvote comments.

'dang, is there any chance this gets improved?

[+] theandrewbailey|6 years ago|reply
I also notice that votes aren't registered if I close the tab immediately (within 1 second) after voting.
[+] narsil|6 years ago|reply
This is how I felt about Algolia's search when I first enabled it for our Vuepress site at https://developers.kloudless.com/guides/enterprise/ (the search bar at the very top). I assumed it had loaded some kind of index in memory via JavaScript since the XHR requests take < 30 ms (!) from my location in San Francisco, which is pretty much instant. That's faster than the delay between my keystrokes.
[+] benbristow|6 years ago|reply
Returns in the same speed from Scotland (Glasgow) too. One request was as low as 16ms. Impressive.
[+] ddorian43|6 years ago|reply
Algolia is in-memory & sorted, so it can do A LOT of early termination of queries and no sorting at query time.
[+] neiman|6 years ago|reply
I wrote a comment system for articles once that was super-duper fast.

The result was that it turned into a chat, so we had to add a "fake delay" for people to treat it as a serious comment system.

[+] ricardobeat|6 years ago|reply
Here's a classic from NNG on 'UX time scales': https://www.nngroup.com/articles/powers-of-10-time-scales-in...

You can infer from there that anything around or under 100ms will feel like direct manipulation, which the person in the post interpreted as 'no request was sent to the server'. A non-technical person might not draw the same conclusion; it will just feel 'different', or they won't notice what happened and will submit multiple times if there is no success message.

You can also trace it back to one of their 10 heuristics: visibility of system status. If users cannot perceive a change because it happened too fast, the UI has failed and users don't know what happened. One of the reasons some websites add artificial delays, as mentioned in other comments, is not only to signify 'work' being done, but also that flashing a spinner for a split second is a bad experience. You're better off normalizing every action to take at least one second, and ensuring the state of the system is always clear.
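The "at least one second" normalization is straightforward to implement by running the action and a minimum delay in parallel and resolving only when both are done. A minimal sketch (names and the one-second default are illustrative):

```javascript
// Make any async action appear to take at least `minMs`: fast responses
// are padded, slow responses are unaffected.
const delay = (ms) => new Promise((res) => setTimeout(res, ms));

async function withMinDuration(action, minMs = 1000) {
  // Start the action and the timer together; the slower of the two wins.
  const [result] = await Promise.all([action(), delay(minMs)]);
  return result;
}
```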

[+] arkadiyt|6 years ago|reply
The server hosting this website doesn't support TLS 1.3 - if it did then you'd have 0 round trip time (0-RTT) session resumption and it would be nearly identical to the http latency.
[+] Thorrez|6 years ago|reply
Do servers automatically support 0-RTT? I thought generally you have to explicitly enable 0-RTT because it's vulnerable to replay attacks. Generally you would only enable it for idempotent requests, and feedback is not idempotent (unless the database explicitly rejects duplicate feedback).
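Right - 0-RTT is opt-in precisely because of replays, and the usual pattern is to signal early data to the application so it can refuse non-idempotent requests. A sketch of what that looks like in nginx (1.15.3+), following the RFC 8470 convention of answering 425 Too Early for unsafe requests; the backend name is hypothetical:

```nginx
server {
    listen 443 ssl;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_early_data on;                        # explicitly accept 0-RTT data

    location / {
        proxy_pass http://backend;
        # $ssl_early_data is "1" while the 0-RTT handshake is incomplete;
        # the backend should reply 425 Too Early for non-idempotent
        # requests (e.g. POSTing feedback) that carry this header.
        proxy_set_header Early-Data $ssl_early_data;
    }
}
```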
[+] disposedtrolley|6 years ago|reply
I've been on the implementation side of this kind of thing more times than I'm proud of.

Hardcoded delays are especially prevalent in systems which attempt to emulate a human operator, such as virtual assistants which are starting to replace human live chat agents. The excuse is always UX related. Progressive disclosure is cited a /lot/. Apparently users get a better experience when systems pretend to be human and respond slowly, so we would hardcode delays which were a function of the length of the response message.
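The delay-as-a-function-of-message-length pattern described above can be sketched in a few lines; the per-character rate and clamping bounds here are illustrative, not anyone's real product values:

```javascript
// Artificial "typing" delay scaled by reply length, clamped so short
// replies still pause a little and long replies don't take forever.
function typingDelayMs(message, { perCharMs = 30, minMs = 500, maxMs = 3000 } = {}) {
  return Math.min(maxMs, Math.max(minMs, message.length * perCharMs));
}
```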

[+] mcqueenjordan|6 years ago|reply
No such thing as too fast. If a user is confused about the interaction because of how fast it is, then that's a UX problem to fix.

Speed is one of the most important properties of exceptional user experiences.

I'm building a developer tool and I'm ruthlessly optimizing for speed. Waiting 20ms for a CLI command versus 100-300ms is a huge difference.

[+] afiori|6 years ago|reply
> If a user is confused about the interaction because of how fast it is, then that's a UX problem to fix

In some cases the confusion comes from a "did this do any work at all" question, like git branching compared to svn for big projects. As others have brought up, an instantaneous reply on airline comparison sites can cause concern about how deep the search was.

A similar paradox is with psychiatrist hourly rates or the placebo effect. The high price (delay) is part of the therapy (interface).

[+] z3t4|6 years ago|reply
You need to tell the user that the message has been sent. If nothing happens when you press the button (just an ajax call, no refresh), the user will think something is wrong.

The reverse can be used in a UI to tell the user that something went wrong. For example, in a window pull-down menu, don't hide the menu right away: do what the user requested, then hide the menu, so that if the request didn't complete, the menu will still be visible.

[+] rachelbythebay|6 years ago|reply
Hi, the web page in question has always had something to tell you what's going on. It's not beautiful but it does tell you that things are happening.

Right before it kicks off the call to the server, it lights up something to say "Submitting feedback", and as soon as it finishes, it flips that to "Feedback saved" (which now has a time attached).

Odds are, most people have never actually noticed the first message, since it is quickly replaced with the second.

The messages appear just to the left of the button which was just clicked (and right under the text field). So, in theory, it's right by where your eyes are looking anyway.

But, here we are.

[+] jobigoud|6 years ago|reply
In a hobby desktop app, I show a splash screen before the UI is fully loaded. After optimizing startup time, it's now at a point where a hot start is very fast, less than 300 ms, to the point that you can't really read anything on the splash screen. Cold start still takes a second or so.

Is there a best practice here? At which point do you stop showing a splash screen? I've seen applications where the splash screen lingers even after the UI is loaded, which seems weird to me and gets in the way of getting things done.

[+] spondyl|6 years ago|reply
I remember attending a talk about speeding up web UIs once and it got to Q&A time.

I’d read some article about how if you respond too quickly, users can begin to doubt that any work is really being performed, and I’ve experienced that feeling a handful of times over the years myself.

Anyway, I asked the speaker about that and everyone just kind of laughed; it does seem a little absurd on the face of it.

I guess it’s also pretty far from most people’s minds given the web is caked in unnecessary bloat a lot of the time :)

[+] jchw|6 years ago|reply
It’s not that people are used to slow stuff, even if they are. It’s that there is a psychological “magic number” whereby something is short enough to seem instant.

There are different values and tons of articles about this, so I’ll just link a random one. I don’t know if there are formal studies on it, but I fully believe in the idea that there is a magic “instantaneous” feeling threshold, just from personal experience, especially with tweaking animation delays.

https://www.nngroup.com/articles/response-times-3-important-...

[+] jasonlfunk|6 years ago|reply
I've run into this before too. Sometimes I've actually added a delay timer to buttons that show loading spinners so that the spinner appears long enough for the user to see more than a flash. Is there a better option?
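One common alternative to a fixed delay timer is to show the spinner only when the action outlasts a short threshold, and once shown, keep it visible for a minimum time so it never just flashes. A sketch, with `ui.show`/`ui.hide` as hypothetical hooks into your component:

```javascript
// Delayed spinner: invisible for fast actions, never a split-second
// flash for slow ones.
const sleep = (ms) => new Promise((res) => setTimeout(res, ms));

async function withSpinner(action, ui, { showAfterMs = 150, minVisibleMs = 500 } = {}) {
  let shownAt = null;
  const timer = setTimeout(() => { shownAt = Date.now(); ui.show(); }, showAfterMs);
  try {
    return await action();
  } finally {
    clearTimeout(timer);
    if (shownAt !== null) {
      // Spinner was shown: pad out to the minimum visible duration.
      const visible = Date.now() - shownAt;
      if (visible < minVisibleMs) await sleep(minVisibleMs - visible);
      ui.hide();
    }
  }
}
```

Fast responses never trigger the spinner at all, which sidesteps the flash problem instead of papering over it with a delay.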
[+] hayksaakian|6 years ago|reply
Another common option is a "toast notification" (see android) or other separate confirmation message to acknowledge your action.

"Sent X" or "Finished Y!" is sufficient to distinguish a failed ajax call from a successful button press

[+] TeMPOraL|6 years ago|reply
'hayksaakian mentioned toasts, which are global, but I think another option would be "toasts" local to the button. E.g. flash a checkmark next to the button user clicked for a second, indicating the action was registered and completed. Key here is: next to, and asynchronous. The checkmark can stay alive for a second, but the button shouldn't be blocked for that time. If the user clicks it again immediately, it should "refresh" the success indicator, or add another one.

UIs in games have solved this in countless ways.

[+] rane|6 years ago|reply
You don't need a spinner if feedback from clicking the button is sufficiently quick.

According to Google that would be around 100ms.

> Guidelines:

> Process user input events within 50ms to ensure a visible response within 100ms, otherwise the connection between action and reaction is broken. This applies to most inputs, such as clicking buttons, toggling form controls, or starting animations. This does not apply to touch drags or scrolls.

https://developers.google.com/web/fundamentals/performance/r...

[+] jraph|6 years ago|reply
Make them solve a reCAPTCHA when clicking on the button. Make sure that the captcha takes at least one second to appear. This will make them feel that something is sent and validated.
[+] jdnenej|6 years ago|reply
Don't use a spinner if it loads fast. One thing I have done is add a little white box with the word "loading..." in it. Users don't have to wait for an animation to see what it means, and it still looks fine for longer loading times.
[+] roselan|6 years ago|reply
I use jquery ui "highlight" effect as visual confirmation for action completion. It's a background easing animation, basically it "flashes" before fading out.

It might not be fancy but it's simple and helps.

[+] baud147258|6 years ago|reply
I feel like it's an issue we will never have on the project I'm currently working on: all requests have a noticeable delay. I guess there are too many layers on the back-end. And maybe some requests are totally not optimised (like doing 20 selects of 1 element instead of one select of 20 elements). And maybe there's not enough caching. Or maybe the things we're caching are not the ones that matter… But first we still have to migrate off Internet Explorer (or at least support another browser).
[+] _squared_|6 years ago|reply
> It's a link in the SF Bay Area, and the server is in Texas, so it has to get out there and back. That's at least 50 milliseconds right there when measured by a boring old ping.

Am I the only one surprised by this 50ms ping? I can reach cloud servers in the SF Bay Area from Paris in 50ms - and I'm on wifi. Surely SFO-TX should take much less time..?

[+] Thorrez|6 years ago|reply
According to this, it takes 42ms to get from Paris to SF at the speed of light in fiber along the great-circle path across Earth's surface. Ping is RTT, so that would be 84ms.

https://www.wolframalpha.com/input/?i=paris+to+san+francisco...

On the topic of websites taking a long time to compute something, Wolfram Alpha is slow. I wonder if any of that slowness is artificial like flight price websites.
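The figure above is easy to reproduce as back-of-envelope arithmetic; the distance and the fiber refractive index below are approximate assumptions:

```javascript
// Best-case Paris -> SF latency, assuming a great-circle fiber run.
const distanceKm = 8950;           // approx. great-circle distance
const c = 299792.458;              // speed of light in vacuum, km/s
const fiberSpeedKmS = c / 1.47;    // light in glass, ~204,000 km/s
const oneWayMs = (distanceKm / fiberSpeedKmS) * 1000;  // ~44 ms
const rttMs = 2 * oneWayMs;                            // ~88 ms
```

Real routes are longer and add switching delay, so an observed 50ms SFO-TX ping being "slow" relative to theory is expected.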

[+] scarejunba|6 years ago|reply
It's like if I go to the restaurant and order a steak and you bring it out right away. I'm going to be suspicious.
[+] ken|6 years ago|reply
Good point.

“Are we to believe that boiling water soaks into a grit faster in your kitchen than on any place on the face of the earth?”

It’s simply Occam’s Razor. Which is more likely in 2019: a webpage is fast, or a webpage has a JavaScript bug?

[+] zeristor|6 years ago|reply
Nice pointer about NTP steps and time-smearing, can anyone recommend a good website for dealing with these issues?

I may not be working on real-time systems at the moment, but I’ve had enough exposure to them in the past that I’d like to scratch that itch.

[+] ricardobeat|6 years ago|reply
While it's a good idea to use performance.now() for its better precision* and monotonicity guarantee, it's not really a huge concern for this kind of application (measuring time between A and B on the client). You're extremely unlikely to experience clock skew during those brief windows, and the entire web relied on Date.now() for performance monitoring for decades.

For dealing with timestamps reliably on the client, we'll just instantiate all dates based on server time instead.

* at least in FF it has been rolled back to 1ms resolution due to privacy/fingerprinting concerns
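A minimal sketch of the measurement being discussed, preferring the monotonic clock where available with a `Date.now()` fallback (which, as noted, can jump on NTP adjustments):

```javascript
// Elapsed-time helper: monotonic when the environment provides
// performance.now(), wall-clock otherwise.
const nowMs = () =>
  (typeof performance !== "undefined" && performance.now)
    ? performance.now()
    : Date.now();

function startTimer() {
  const t0 = nowMs();
  return () => nowMs() - t0;   // call to get elapsed milliseconds
}
```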

[+] quantified|6 years ago|reply
To the last comment within the article: yes, we’re accustomed to laggy websites. Most sites have tons of chatter/bloat/trackers. Refreshing to encounter those that don’t. Thankfully HN is fairly low-bloat itself.
[+] calpaterson|6 years ago|reply
Apache, CGI and a C++ handler is refreshingly old fashioned.