item 3580631

"Unknown or Expired Link" - A failure in gauging user intent

15 points | unconed | 14 years ago | reply

The purpose of a user interface is to enable a user to achieve desired outcomes. It is essential to correctly interpret user intent, and execute on that intent.

And yet Hacker News, home of fiercely competing start-ups and friendly web apps, routinely throws a hissy fit with its 'unknown or expired link' error, which ironically appears more often the longer you spend actually reading the site's content rather than mindlessly clicking through.

But not only does the site often fail to satisfy the simple request to load the next page of links, it provides no method of recovery. The user can't refresh or go back a page and try again, as the IDs in URLs are meaningless and expire constantly. Context has been destroyed, and the only way to achieve your desired outcome is to start from the front page again and click through faster.

Why not annotate each 'more' link with the numeric page offset? Or with the ID of the last article on the page? Then if the cached listing is no longer available, you at least have a hope of recovering and presenting something useful instead.
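The suggestion above — encoding the numeric offset or the ID of the last visible item in the 'more' link — is ordinary stateless pagination. A minimal sketch of the last-ID variant, using a hypothetical item table and handler (not HN's actual code, and ordered by ID rather than rank for simplicity):

```python
import sqlite3

# Hypothetical schema; HN's real storage and ranking are different.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT)")
db.executemany("INSERT INTO items (id, title) VALUES (?, ?)",
               [(i, f"Story {i}") for i in range(1, 101)])

PAGE_SIZE = 30

def page_after(last_id=0):
    """Return one page of items plus the 'more' link for the next page.

    The link encodes the last item's ID, so it never expires: any
    request can be answered directly from the database, with no
    server-side session state to lose.
    """
    rows = db.execute(
        "SELECT id, title FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE_SIZE)).fetchall()
    more = f"/news?after={rows[-1][0]}" if rows else None
    return rows, more

rows, more = page_after()           # first page: items 1..30
rows2, _ = page_after(rows[-1][0])  # next page resumes from item 30
```

Even if the cached listing is gone, a link like `/news?after=30` still describes where the reader was, so the server can rebuild something useful instead of erroring out.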

The only logical explanation I can think of is that it's some sort of masochistic way of keeping people from wasting too much time on HackerNews.

6 comments

[+] sycr | 14 years ago | reply
This has been discussed before. Here is the conversation:

http://news.ycombinator.com/item?id=3098756

And pg's explanation:

> It's not so much that it's ahead of its time relative to hardware as it is something you do in the early versions of a program.
>
> Using closures to store state on the server is a rapid prototyping technique, like using lists as data structures. It's elegant but inefficient. In the initial version of HN I used closures for practically all links. As traffic has increased over the years, I've gradually replaced them with hard-coded urls.
>
> Lately traffic has grown rapidly (it usually does in the fall) and I've been working on other things (mostly banning crawlers that don't respect robots.txt), so the rate of expired links has become more conspicuous. I'll add a few more hard-coded urls and that will get it down again.
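For contrast, here is a toy sketch of the technique pg describes: each 'more' link is a server-side closure registered under a random ID, and links die when the table is pruned. All names and the tiny cap are invented for illustration; HN itself is written in Arc, not Python.

```python
import secrets

handlers = {}      # fnid -> closure; lives only in this server process
MAX_HANDLERS = 2   # tiny cap so eviction is easy to demonstrate

def register(closure):
    """Store a closure under a random ID and return a link to it."""
    if len(handlers) >= MAX_HANDLERS:
        handlers.pop(next(iter(handlers)))  # evict the oldest entry
    fnid = secrets.token_hex(5)
    handlers[fnid] = closure
    return f"/x?fnid={fnid}"

def handle(fnid):
    """Dispatch a request; evicted IDs get the familiar error."""
    fn = handlers.get(fnid)
    return fn() if fn else "Unknown or expired link."

link = register(lambda: "page 2")
fnid = link.split("=")[1]
assert handle(fnid) == "page 2"   # works while the closure is cached
register(lambda: "a")             # two more registrations hit the cap
register(lambda: "b")             # ...and evict the "page 2" closure
```

The closure captures its page context for free, which is the elegance pg mentions, but the URL is meaningless on its own: once the entry is evicted (or the process restarts), there is nothing left to recover from.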

[+] gregjor | 14 years ago | reply
The closure explanation would make more sense if paging through lists of links/articles weren't one of the oldest solved problems of web site UI. Closures may be a quick & elegant rapid prototyping technique, but probably not as quick and useful as doing what thousands of other web sites already do in a few lines of code.
[+] adir1 | 14 years ago | reply
I just got that two times in a row. I wonder if they just don't have the budget to improve the site?
[+] brudgers | 14 years ago | reply
>"The only logical explanation I can think of is that it's some sort of masochistic way of keeping people from wasting too much time on HackerNews."

My understanding is that the logical explanation is that it works well enough for the intended purpose - i.e. it's functional.

[+] jmitcheson | 14 years ago | reply
What's the big deal? Hit F5 before clicking "More" or open the "More" link when you start your session, and keep it open in the background until you need it.
[+] unconed | 14 years ago | reply
There's always the one engineer who considers annoying workarounds a normal part of using software.

You're not addressing the problem, only delaying it by one page.

Most links posted here get torn to shreds over far more trivial things.