For the last few weeks, until a few days ago, I was unable to access Twitter without doing a hard refresh on every click, because they installed a faulty service worker of some kind that broke most of the requests. (Both on desktop and Android.)
And on Android, every time I follow a link to Twitter in an app that opens it in a web view, I get a faulty page that I have to refresh a few times before I'm able to view it. It loads the page fine, but some REST call or whatever that fetches the tweet crashes.
Edit: I hadn't heard many others complain about this before, so I mainly thought it was something about my setup. But the huge number of upvotes this got suggests some Twitter engineers had better look into this...
> And on Android, every time I follow a link to Twitter in an app that opens it in a web view, I get a faulty page that I have to refresh a few times before I'm able to view it. It loads the page fine, but some REST call or whatever that fetches the tweet crashes.
I get this issue too, I figured it was part of some trick to make me want to use the app.
Thank you for mentioning this. I've been getting frustrated with my Firefox install for a while because everything is so slow and it eats so much memory. After deleting the 200 service workers [1] I didn't even realize were installed, it is taking wildly less memory and seems generally snappier. I wouldn't call it "night and day", but it's faster.
I had been under the impression that service workers required approval from the user to be installed. I had service workers from websites that I haven't visited in years and don't even exist anymore, sitting there chewing on my RAM every time I started Firefox.
Also Twitter has also been broken for me this way for months.
Service workers have some nice features, but I submit that they need to be something whitelisted by users, not something websites can just toss in there whenever they feel like it. The odds of one of them screwing up and eating far more resources than it has any right to approach 1 as the number of them increases. I don't know exactly how they managed to collectively eat 2 GB of RAM and I don't care; there is no way they were bringing that much value to me, especially as I permit no website to simply push me notifications.
[1]: A suspiciously round number; it was exactly 200. Is that a limit? I don't see it in about:config.
So it wasn't just my browser or my internet connection. I used Ctrl+F5 in Firefox and it always worked, but on a normal refresh it would either display an error or not load the page at all.
I thought my IP had landed on some list where Twitter degraded the connection, and I wasn't even logged in.
In Safari it will just drop you to an error page 70% of the time you navigate to it through a link, I assume because it doesn’t get its tasty tracking data. You have to reload the page to get anything to show up.
This has been my experience on mobile twitter for years, and in the last year on the desktop one as well. I thought it was a passive aggressive push for users to log in or install the official apps.
I don't use Twitter except when I'm linked to it and for occasional searches, and it's been a huge step backwards with the new UI that was forced on everyone a few months ago. It even has a loading screen because it's much slower, and somehow also manages to show less content than the previous design.
If I remember correctly there was a brief period a while ago when they did use a JS-only "web app", but then switched back to simple HTML with JS enhancements (and I seem to remember they announced that change with much pride).
Of course, there is always mobile.twitter.com that is still static-only and quite usable without JS, but IMHO Twitter is the perfect example of how the "modern web" is doing less with more.
Wow, I guess this is common. This has happened to me for months using Chrome on Ubuntu with UBlock & Privacy Badger. I always assumed it was the extensions but I guess not.
Twitter is literally so buggy. Most of the time I have to refresh a page several times. Maybe it's because I don't have an account and refuse to use their apps?
I've been experiencing something similar: every time I click on a Twitter link from outside of Twitter, Firefox fails with a "There has been a protocol violation" error, and a hard refresh fixes it.
I'm not sure whether it's because I've setup FF containers to open all twitter.com links in a separate container.
Service workers are hard to get right. If they've, for example, updated the service worker, it might be damn near impossible to unregister the old one and install the newer one before the old worker is set to be cleared from the cache.
No freakin way... this whole month I thought twitter was broken... I figured it was some censorship related filter. Every visit (for me at least) requires a reload to work properly.
I've noticed something weird moving between Hacker News and Twitter in Safari on my iPhone 7. If I first visit Twitter and then, in the same tab, go to Hacker News, the favicon is the Twitter one but the tab title says Hacker News. This doesn't happen with other sites I tested, like Google. It's quite odd; it's like the favicon for Twitter overrides the Hacker News one for the first page load.
I have the same issue with Firefox on macOS: third-party cookies disabled, privacy protection on, uBlock Origin, and Pi-hole. I assume it's a poorly handled call to a tracking or ad domain that fails, blowing it up.
I have to refresh the page once or twice when attempting to load a tweet in Safari or Chrome on iOS, otherwise I usually get a cryptic "this tweet is not available to you" message or something similar. I also just assumed it was a dark pattern to get me to use the app.
> It strips out the www. prefix to make a “display version” of the URL. I have no problem with this, as the prefix is entirely meaningless to humans. It does serve important technical functions, however.
I know most people don’t know the difference, and it would generally be a bad idea to have your www not redirect to the bare domain (or vice versa), but personally I prefer when we don’t hide these things. Just a bit of pedantic correctness, I guess.
> I can’t look at older versions of Twitter, as its pages don’t work well in the Internet Archive’s Wayback Machine.
All of this gets to me. A www subdomain is _not_ necessarily equivalent and interchangeable with the apex domain; treating them as equivalent is presumptuous in the extreme.
> Twitter redirects links through its t.co link-shortening service. It was once a useful addition to its service as it helped people stay underneath the strict character limits. The link shortener reduced all links to 23 characters. Twitter gains some more data-insights about its users in the form of click-stream data and insight into popular links.
The t.co link also helps them block URLs that they deem problematic on their platform - in the event of spam, attack, or abuse, the redirect can instead be a black hole.
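That blocking behaviour can be pictured as a small lookup sitting in front of the redirect. Here is a minimal Python sketch of the idea; the table, short codes, and status handling are all made up for illustration and are not Twitter's actual implementation:

```python
# Toy model of a t.co-style shortener that can "black-hole" flagged links.
# The mapping, codes, and URLs below are hypothetical.
SHORT_LINKS = {
    "abc123": "https://example.com/article",
    "evil99": "https://malware.example/payload",
}
BLOCKED = {"evil99"}  # codes flagged for spam, attack, or abuse

def resolve_short(code):
    """Return (status, location) the way a shortener endpoint might."""
    if code in BLOCKED or code not in SHORT_LINKS:
        return (404, None)            # black hole: no redirect is served
    return (301, SHORT_LINKS[code])   # permanent redirect to the target
```

The point of the indirection is that the block check runs on every hit, so a link can be killed platform-wide even after it has already been tweeted.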
Well I learned something that's not in the article.
I thought browsers requested a domain lookup (gethostbyname()) and basically got back a "zone file" which would have the CNAMEs in it. So I was confused when people complained about Twitter forcing a domain lookup "on the wrong domain", as I was assuming this would at least cache the domain lookup. It's the right domain, of course, but the lookup for an address on a subdomain includes the subdomain and then gets the CNAME directing wherever.
It always confused me that dig/nslookup didn't seem to provide all the info. They can, using nslookup's 'ls example.com' or 'dig example.com -t AXFR', but the server in general refuses to serve the zone file (seemingly for security-by-obscurity reasons).
So, for example, if the browser looks up example.com it doesn't learn that there is a CNAME from www to example.com. It only gets that relationship by looking up "www.example.com".
So, TIL, and now results provided by dig/nslookup on the command line make more sense!
Almost everyone uses a “recursive DNS resolver” provided by their ISP, or one of the big public ones from Google/Cloudflare/Cisco. The recursive resolver does all the hard work of resolving the root, the top-level domain, the apex domain, the subdomain, and any CNAMEs, and finally finds the right IP addresses to return to the DNS client. Recursive resolvers benefit a lot from caching responses at each stage of the chain, the same way your browser/OS/router DNS clients benefit from caching the final responses from the recursive resolvers. If you run a full DNS resolver yourself, you have to do all of these steps locally.
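The chain described above, including why the www CNAME only shows up when you ask for the www name, can be sketched with a toy in-memory resolver. The records here are hypothetical; a real resolver speaks the DNS wire protocol and honours per-record TTLs:

```python
# Toy name resolution over an in-memory record set. The CNAME is attached
# to "www.example.com", so a lookup of the bare apex never reveals it.
RECORDS = {  # hypothetical zone data
    ("example.com", "A"): "93.184.216.34",
    ("www.example.com", "CNAME"): "example.com",
}

def resolve_name(name, cache):
    """Follow CNAMEs until an A record is found, caching every step."""
    if name in cache:
        return cache[name]
    chain = [name]
    while (name, "A") not in RECORDS:
        name = RECORDS[(name, "CNAME")]  # follow the alias one hop
        chain.append(name)
    addr = RECORDS[(name, "A")]
    for n in chain:                      # later lookups hit the cache
        cache[n] = addr
    return addr
```

Resolving "www.example.com" walks the CNAME hop before hitting the A record, while resolving "example.com" goes straight to the A record and never touches the alias, which is the asymmetry the parent comments puzzled over.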
How does the preconnect work with the t.co redirect in between? t.co will return a 301, right? Then we see the real domain, and only then can the browser preconnect to the server, not earlier. Or can it?
Twitter runs t.co, so they know where the redirect goes without actually asking t.co. That means they can preconnect to the target domain as well. And that's where they goofed up (I mean, apart from preconnecting to all these domains in the first place).
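Because the t.co mapping is known server-side, the page generator could in principle compute preconnect hints for both the shortener and the final targets up front. A hypothetical sketch (the mapping and helper name are made up):

```python
from urllib.parse import urlsplit

# Hypothetical t.co-style mapping available to the page generator.
SHORT_LINKS = {
    "https://t.co/AbC": "https://www.example.com/post/1",
    "https://t.co/DeF": "https://blog.example.org/entry",
}

def preconnect_origins(short_urls):
    """Origins worth warming up: the shortener plus each known redirect target."""
    origins = set()
    for short in short_urls:
        for url in (short, SHORT_LINKS[short]):
            parts = urlsplit(url)
            origins.add(f"{parts.scheme}://{parts.netloc}")
    return origins
```

Each origin would then be emitted as a preconnect hint in the page head, which is how the browser can open connections to the final destinations before any 301 from t.co has been seen.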
nukemandan | 5 years ago
Not ideal
dtech | 5 years ago
That might be why they don't add the subdomain: adding a unique subdomain to track a user is free, and a domain isn't.
chinathrow | 5 years ago
"Technically, it only preconnects when you hover over the link."
outloudvi | 5 years ago
* Twitter
* Google (maybe deliberately)
* "The almighty WHATWG" who accepts Google's revision on whatwg/url about this
netsharc | 5 years ago
Hah, what a dumb comment. Let's just go back to AOL keywords... but we can call it Google keywords.
I propose a new URL scheme: "web:nytimes/some/article/". Sadly I don't work for the Chrome team, so I can't just force it down the web's throat.
hk__2 | 5 years ago
This is already what we have, except that "web" is "http" and we have TLDs to namespace domains.