top | item 4300647

Can't somebody fix the "Unknown or expired link" bug?

58 points | MortenK | 13 years ago | reply

Surely it should be possible to solve this problem.

[+] ralph|13 years ago|reply
This is an old problem due to the use of fnid to refer to a continuation. http://news.ycombinator.com/item?id=164254

(The poor formatting of the sed code there is caused by the post being made dead; it triggered a re-submission of the content, causing corruption.)
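The continuation mechanism described above can be sketched in a few lines. This is a hypothetical Python model (HN itself is written in Arc, and all names here are invented for illustration): the server stores a closure under a random fnid, the "More" link carries that fnid, and once the entry is gone the link can only fail.

```python
import secrets

# In-memory table mapping fnid -> continuation: a closure that knows how
# to render the next page exactly as the user last saw the list.
continuations = {}

def register_continuation(render_next_page):
    """Store a closure and return the fnid to embed in the 'More' link."""
    fnid = secrets.token_hex(8)
    continuations[fnid] = render_next_page
    return fnid

def handle_click(fnid):
    """Run the stored continuation, or fail the way HN does."""
    cont = continuations.get(fnid)
    if cont is None:
        return "Unknown or expired link."
    return cont()

# The front-page handler captures the current story ordering in a closure,
# so the second page continues the list the user actually saw.
stories = ["story-%d" % i for i in range(60)]
fnid = register_continuation(lambda: stories[30:60])
```

The convenience is that any link can run arbitrary server-side code without a dedicated URL scheme; the cost is that the link is only as durable as the cache entry behind it.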

[+] samwillis|13 years ago|reply
How about this as a fix that retains the current design and functionality?

As well as including the fnid in the pagination links, also include a pg=1...n parameter that is used only as a fall-back if the continuation has been garbage-collected. That way you retain the continuation design, which lets the user see the list continued in the order it was on the last page, but if the continuation has been collected you take the most current ordering and return page n.

If I had the time this afternoon I would have a look through the code to see if this would work, but unfortunately I don't. Is there anyone here familiar with the code base who could assess whether this is a simple change?
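The proposed fall-back could look roughly like this. A hypothetical Python sketch, not the actual Arc code (`more_link`, `handle_more`, `current_ranking` and the parameter names are all invented for illustration):

```python
def more_link(fnid, page):
    # Emit both the fnid and an explicit page number in the "More" link.
    return "/news?fnid=%s&pg=%d" % (fnid, page)

def handle_more(fnid, page, continuations, current_ranking, per_page=30):
    """Prefer the stored continuation; fall back to plain pagination."""
    cont = continuations.get(fnid)
    if cont is not None:
        # Original behaviour: the list continued as the user last saw it.
        return cont()
    # Continuation was garbage-collected: take the most current ordering
    # and return page n instead of "Unknown or expired link".
    start = (page - 1) * per_page
    return current_ranking[start:start + per_page]
```

Users with a live continuation see no change; everyone else gets a slightly inconsistent page instead of an error.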

[+] codex|13 years ago|reply
It doesn't reflect well on YC that the site is so embarrassing technically. I will now force a speedy fix by suggesting that the problem lies with Lisp.
[+] quesera|13 years ago|reply
I know you're kidding, I'm just not sure if you're half-kidding or whole-kidding.

Would YC be more successful if HN was technically improved? Certainly not. Would HN be more successful? Hard to say, but probably irrelevant -- HN is the social focus of the startup community. This has rewards for YC, but comes with taxes as well.

Regardless, there is no trend away from HN visible today, and the cobbler has other work.

[+] brlewis|13 years ago|reply
:-) I don't think pg needs to prove that it's possible to explicitly carry state from request to request using Arc. You can do that in any language. Continuation passing is a cool hack, though I wouldn't use it myself on a production web site that needed to scale. And I do use Scheme.
[+] rcamera|13 years ago|reply
This bug doesn't only happen on the homepage, but also in old threads: when you click "More" at the bottom of the page to read the next page of comments, you get this error message instead. If you wait a bit, go back to the previous page, refresh and click "More" again, it goes away, but it is certainly annoying.
[+] ralph|13 years ago|reply
There's no need to "wait for a bit". Just go back and reload the page so that you have fresh fnids that stand a chance of still being in the cache when you click the link.
[+] sidcool|13 years ago|reply
Problem? I thought this was a feature. It happens when the user does not interact with a page for some time. I thought HN has it so that the users are forced to refresh the screen and get the latest news. Am I wrong here?
[+] vegashacker|13 years ago|reply
Ha. Never thought of that. But no, it's not a feature in the sense of something that was designed with this goal in mind. It's a way of not having to design and implement a whole URL scheme, and instead being able to easily run arbitrary code in response to a link being clicked. This architecture, however, has the undesirable (except apparently to you) side effect of occasional "unknown or expired link" errors.
[+] scotty79|13 years ago|reply
Couldn't pg just toss an additional GB of RAM into the server and dedicate it to the continuations cache? Just extending the time from a few minutes to a few hours or days should be enough.
[+] jpswade|13 years ago|reply
I think it's a feature not a bug.

It seems to stop the second page becoming stale, forcing you to refresh and get a new copy of the front page first, preventing you from viewing outdated content.

[+] sirclueless|13 years ago|reply
While I agree it's a feature, I'm pretty sure it works the opposite way. The idea is that you want the second page to be "stale": if you've just spent time reading the front page, when you click to the second page you should be getting all new links.

This comes at a cost though, because the server must cache all the possible second pages that might be requested. So HN makes a compromise and caches them for an hour or so. If you wait an hour, it will have been evicted from the cache and the error will appear.
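The bounded cache with eviction described above can be modelled like this. A hypothetical Python sketch (the class, its capacity, and the oldest-first policy are assumptions; the real cache's internals aren't public), showing why links die sooner when the site is busy:

```python
from collections import OrderedDict

class ContinuationCache:
    """Fixed-size store: inserting past capacity evicts the oldest entry,
    so the busier the site, the shorter a link's effective lifetime."""

    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._store = OrderedDict()  # preserves insertion order

    def put(self, fnid, cont):
        self._store[fnid] = cont
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # drop the oldest continuation

    def get(self, fnid):
        # Returns None once the entry has been evicted -> "expired link".
        return self._store.get(fnid)
```

Under this model, more RAM (a larger `max_entries`) or less traffic both translate directly into longer-lived links, which matches the behaviour people report at quiet times.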

[+] se85|13 years ago|reply
Yes, it is a feature, but a feature that leads to a terrible user experience. So while it may not be a bug, since it's by design, it's still a really big defect.

In my own case: I would contribute much more to HN if I didn't have this problem on a daily basis.

Usually it happens after I have tabs open, go get some food and come back; I then need to start my browsing on HN all over again. Usually this only happens once, because I say fuck it and go elsewhere.

What are the pros that make this con worth it? I'm not seeing them; it's just a big massive pain in the ass.

[+] ralph|13 years ago|reply
No, it's a bug, a design flaw in using continuations. See my other comment. If it were by design then the links wouldn't continue to work for longer at quiet times on the site; they do, simply because old continuations haven't been flushed to make room for new ones.
[+] corkill|13 years ago|reply
I find it a handy indicator that I've probably spent too much time on the site. Also pretty sure it's by design.
[+] iwwr|13 years ago|reply
What about the app automagically saving the linked content in a cache and if the link ever goes dead, or content vanishes, to present the cached version to users?
[+] ralph|13 years ago|reply
I'm unclear what you're suggesting, but your up-votes suggest I'm alone. :-) The fnid (function ID) is the index into the cache storing the continuations. It's this cache that drops `old' entries as new ones are added. Depending on how busy the site is, that takes a varying amount of time.
[+] philh|13 years ago|reply
I think the problem isn't that it's hard to solve, just that it would take time which pg is spending on more important things.
[+] pasbesoin|13 years ago|reply
Keep in mind, as pg confirmed the other day, HN is still running on a single core.

pg/HN has always been good at prioritizing. (And at identifying his interests versus yours, which may often but not always overlap.) Also, there are ready workarounds, if you must, e.g. load the linked page of interest into a new tab before it expires.

This is something people liked about HN, including early on. That pg would make useful decisions and then not take / cave in to cr-p about them.

I'll live with the expiring links, if and as it makes other parts of managing HN easier.

P.S. As I reflect, is some of the increased discomfort and agitation from users on this point due to an increase in mobile browsing, where such user-initiated workarounds face a more cumbersome UI? (Not all, but some.)

[+] ralph|13 years ago|reply
It was a lot worse in the past, but pg cut back on the number of fnids being generated, switching to more conventional methods. There are still quite a few around though, and obviously increased traffic, meaning more new fnids to store, puts pressure on the cache.
[+] thatusertwo|13 years ago|reply
I didn't know it was a problem, just assumed it was a way of making sure the next page's content is up to date.
[+] anamax|13 years ago|reply
Complaining about HN's site operation strikes me as being similar to complaining about Marilyn Monroe's driving skills.
[+] vkkan|13 years ago|reply
Do you mean in the web app, or in your blogs? For both, a small service using the broken-link checker from the W3 site would do, right?
[+] szajbus|13 years ago|reply
He means Hacker News' next-page link. The URL you get by clicking it is unique to you and preserves the order of stories as you navigate from one page to the next, but it has a short lifetime.