item 2306319

Saved 10 billion DNS queries per month by disabling DNS Prefetching

159 points | necro | 15 years ago | pinkbike.com

50 comments

[+] Xk|15 years ago|reply
That's all very nice, but it seems to have more to do with the fact that each person gets his/her own subdomain. And the fact that I found three XSSes in about five minutes [edit: and then about ten more in the next five seconds, after realizing any input box works] doesn't give me confidence in their abilities.

http://www.pinkbike.com/news/search/?q=%3C%2Ftitle%3E%3Cscri...

http://www.pinkbike.com/product/compare/?items=466,%22%3E%3C...

http://www.pinkbike.com/photo/list/?date=all&text=%3C/ti...

http://www.pinkbike.com/buysell/list/?q=%3Cscript%3Ealert%28...

http://www.pinkbike.com/forum/search/?q=%3C/title%3E%3Cscrip...

Edit: I've stopped adding xss's. It's actually harder to find input boxes which don't lead to xss's than ones which do.

[+] Xk|15 years ago|reply
Whoever fixed the XSS (necro?): I'm impressed by how fast that was.

But please don't rely on just escaping < and >. You have to worry about double quotes too: I can end a string and add an "onload" or "onfocus" attribute if the output is already inside a tag. And sometimes you have to worry about single quotes. In fact, there's a lot to take a look at.

Instead of just fixing the case at hand, try to be proactive about it. Check to make sure you don't have anything else.

Edit: Click the search box, for example. http://www.pinkbike.com/forum/search/?q=%22%20onclick=%22ale...
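The point about quotes can be sketched in Python (illustrative, not pinkbike's actual fix): escaping only < and > leaves attribute injection wide open, while a proper HTML escape also handles quotes.

```python
import html

# Hypothetical attacker input aimed at a value="..." attribute:
payload = '" onclick="alert(1)'

# Escaping only < and > does nothing here -- the double quote
# still closes the attribute and injects an onclick handler:
naive = payload.replace("<", "&lt;").replace(">", "&gt;")

# html.escape with quote=True also escapes double (and single)
# quotes, so the payload stays inert inside the attribute:
safe = html.escape(payload, quote=True)
```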

[+] ars|15 years ago|reply
Summary (he really should have said this part upfront):

He is (ab)using subdomains by giving every user a different subdomain on his site, so a typical page can contain links to hundreds of different subdomains.

For a normal site, disabling prefetching is neither necessary nor a good idea.

This article should be renamed: "Why using hundreds of unique subdomains is not a good idea." Besides all the DNS queries, you are also messing with caches.

I don't know anything about his site, but I don't see any reason that every user needs a unique subdomain.

[+] bodyfour|15 years ago|reply
One reason to do per-user domains is to prevent cookie leakage caused by user-specified content. A few years ago LiveJournal got bitten by a firefox bug that allowed JS execution inside CSS. Since they allowed each user to style their own page that meant that a malicious user could make the CSS at "www.livejournal.com/~evil_user" capture the viewer's www.livejournal.com cookies. That's why they moved everybody to username.livejournal.com, which previously had been a pay-only feature.

I think the really perplexing thing about this article is why browsers are so aggressive with DNS prefetching. If there are links on the page to 300 different domains, is there much benefit to prefetching ALL of them? It seems clear that very few of the domains are likely to be clicked on. If anything, they're probably hurting performance by bombarding the local nameserver with a flood of requests.

It sounds like browsers need a better prefetching heuristic.
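With made-up but plausible numbers the scale is easy to sketch; the 300-domain figure is the hypothetical above, and the A+AAAA behavior is mentioned further down the thread:

```python
links_per_page = 300   # hypothetical unique subdomains on one page
record_types = 2       # A + AAAA lookups per name

# Queries fired per page view before the user clicks anything:
queries_per_view = links_per_page * record_types

# At that rate, 10 billion queries/month would correspond to
# roughly 17 million uncached page views:
page_views_per_month = 10_000_000_000 / queries_per_view
```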

[+] necro|15 years ago|reply
It's a good point, and in retrospect I might not have used the username.example.com pattern, but this was a choice made many years ago. Users seem to like it, and unfortunately these recent changes in browsers make the pattern less desirable. Other sites, such as Tumblr, use this subdomain style too.
[+] davidu|15 years ago|reply
The real issue is that the cost per query is negligible here and DNS providers shouldn't charge by the query yet almost all of them do. They have to have some metric to separate the big guys from the little guys and charge the big guys more, but this is a crude way to do it.

Consider that publishers often have little control over how many DNS requests they get, so charging for something outside your control seems utterly bizarre to me. Nice to see that in this instance the publisher was able to make a meaningful improvement.

Also keep in mind, I used to run the largest free DNS service in the world so I'm well aware of what I'm talking about and am totally biased on these matters. :-)

[+] necro|15 years ago|reply
The thing to note is that browser prefetching puts DNS requests even further out of publishers' control. 10 billion queries works out to something like 5000 DNS req/s, and that is a considerable amount of resources for any DNS service. I wish DNS were not billed per query, but that seems to be the metric pricing is based on. The point of the article is to let people know that if you fall into this DNS pattern, it may be a result of the prefetching recently added to browsers, and you may have a way to resolve the issue.
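As a back-of-envelope check on that rate (assuming a 30-day month):

```python
queries_per_month = 10_000_000_000
seconds_per_month = 30 * 24 * 3600          # 2,592,000 s

# Average rate; traffic peaks sit well above the average, which is
# why "something like 5000 req/s" is the right ballpark:
avg_qps = queries_per_month / seconds_per_month
```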
[+] otterley|15 years ago|reply
Perhaps a per-query charge isn't optimal, but how else can a provider of a service equate supply with demand?

More load implies additional cost, and DNS providers are no exception; there is an incremental cost of serving a DNS response. Granted, it's small, but real nonetheless.

One could charge based on bytes transferred instead, but charging for bytes is functionally equivalent to charging for responses.
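A sketch with assumed numbers makes the equivalence concrete: response sizes are roughly constant, so total bytes is just a constant multiple of response count, and either billing metric ranks customers the same way.

```python
AVG_RESPONSE_BYTES = 100   # assumed typical DNS response size

def monthly_bytes(responses: int) -> int:
    # Bytes transferred grows linearly with responses served, so
    # per-byte and per-response billing are interchangeable up to
    # a constant factor.
    return responses * AVG_RESPONSE_BYTES
```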

[+] coderdude|15 years ago|reply
I'm surprised the article doesn't actually tell you the meta http-equiv to disable DNS prefetching. It mentions that it helped out tremendously, though. Here it is:

<meta http-equiv="x-dns-prefetch-control" content="off" />

or, if you're more into HTML5:

<meta http-equiv="x-dns-prefetch-control" content="off">

[+] buro9|15 years ago|reply
I prefer HTTP headers instead of meta tags where possible.

An example solution at the load balancer level, assuming the use of Varnish:

In vcl_fetch:

  if (req.url ~ "\.(htm|html|php)" || req.url ~ "\/(\?.*)?$") {
    set beresp.http.disable-dns-prefetch = "1";
  }
In vcl_deliver:

  if (resp.http.disable-dns-prefetch) {
    remove resp.http.disable-dns-prefetch;
    set resp.http.X-DNS-Prefetch-Control = "off";
  }
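For a site not fronted by Varnish, the same rule can live in the application layer instead; a minimal Python sketch whose regex mirrors the VCL above (function and header names are illustrative):

```python
import re

# Same matching rules as the VCL snippet: page-like extensions,
# or a path ending in "/" (optionally followed by a query string).
PAGE_RE = re.compile(r"\.(htm|html|php)|/(\?.*)?$")

def response_headers(url: str) -> dict:
    headers = {"Content-Type": "text/html"}
    if PAGE_RE.search(url):
        # Only page responses need the header; static assets don't.
        headers["X-DNS-Prefetch-Control"] = "off"
    return headers
```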
[+] rodion_89|15 years ago|reply
Not self-closing the tag is nothing specific to HTML5; it just means you aren't writing strict XHTML. HTML4 and others would be the same.
[+] joshfraser|15 years ago|reply
Before everyone goes rushing off to disable DNS prefetching, remember that DNS prefetching is generally a good thing that exists to make websites faster. And the faster your site is, the more pageviews you can expect. Faster sites also have a lower bounce rate and better ranking from Google.
[+] ssp|15 years ago|reply
Does Google take DNS prefetching into account when they measure the speed of a web site?
[+] Rantenki|15 years ago|reply
This implies that providers are either severely limiting their cache sizes or expiring records sooner than the posted TTL. Even though pinkbike looks like it has thousands of users, one would expect the front pages to be largely identical across most user sessions, so ISP DNS caches should already have most of those username.domain.com records cached. Either that, or the ISPs' DNS servers are more numerous and distributed, with fewer customers each, or something like that.

Anybody at an ISP that can fill us in on DNS TTL mangling or cache limiting?

[+] metageek|15 years ago|reply
When I worked on Akamai's DNS server, I was told that most ISPs cached much more aggressively than we told them to. We would set TTLs at...I want to say 5 seconds...because we were directing traffic based on real-time network conditions; but most ISPs would cache for at least an hour.

Exact numbers may be off, of course; it was a few years ago.

[+] patrickgzill|15 years ago|reply
I am actually rather shocked that people are being charged extra for DNS. Surely the answer is to get any sort of cheap VPS and run DNS on that box? Then again, why would you make every aspect of your site depend on a service that costs $2/month?

Even a 128MB RAM VPS could comfortably handle a huge number of requests.

[+] arbitrarywords|15 years ago|reply
Depends on the location of your users. It's quicker to have someone host it in multiple places all over the world (assuming you care about perceived page load speed).
[+] datums|15 years ago|reply
Create an A record for your most-queried subdomains. Those will be cached and will eventually decrease the number of queries. Depending on the number of subdomains, you could create and remove records on signup and cancellation.
[+] ck2|15 years ago|reply
short answer for Firefox

   about:config 
   network.dns.disablePrefetch             (true)
   network.dns.disablePrefetchFromHTTPS    (true)
[+] ck2|15 years ago|reply
For those downvoting, did you want the server-side solution?

    <meta http-equiv="x-dns-prefetch-control" content="off" />
[+] hartror|15 years ago|reply
But that is a good thing for me!
[+] andrewcooke|15 years ago|reply
ok i'm confused. why doesn't dns wildcard caching solve this?
[+] chrisbolt|15 years ago|reply
DNS wildcards are expanded server-side, so there's no way for the client to know whether the answer it got came from *.domain.com or from a specific record like www.domain.com; each name still has to be queried and cached separately.
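A toy model (illustrative Python, not a real resolver) of why the wildcard doesn't help: expansion happens on the authoritative server, and the client's cache is keyed by the exact name it queried, so every new subdomain costs a fresh lookup.

```python
# Authoritative side: wildcard expansion happens here.
ZONE = {"*.domain.com": "192.0.2.1", "www.domain.com": "192.0.2.2"}

def authoritative_lookup(name: str) -> str:
    if name in ZONE:
        return ZONE[name]
    # Wildcard match; the client never learns this was a wildcard.
    return ZONE["*.domain.com"]

# Client side: cache keyed by the exact name queried.
cache: dict[str, str] = {}

def resolve(name: str) -> str:
    if name not in cache:          # cache miss -> real upstream query
        cache[name] = authoritative_lookup(name)
    return cache[name]
```

Resolving alice.domain.com and bob.domain.com both hit the wildcard, yet each is a separate upstream query; caching one tells the client nothing about the other.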
[+] gcb|15 years ago|reply
+ saved 10 billion DNS queries per month.

- wasted a total of 10 sec of each user's time per month.

it's all about trade-offs

[+] chrisbolt|15 years ago|reply
In our (deviantART's) case, the 24-60 links to subdomains were set up so that if you clicked them, the content loaded dynamically with JavaScript on the client side, never actually hitting that subdomain. No time was wasted on the user's side, since the browser was prefetching domains that were rarely being hit.
[+] Rantenki|15 years ago|reply
No, it's not; the issue is that the domains are prefetched REGARDLESS of whether the links are ever clicked. The browser is prefetching in case the user DOES click a link. This really verges on being a browser bug, especially since in Firefox's case it fetches both A and AAAA records.
[+] blantonl|15 years ago|reply
You have neglected to take into consideration the cost of running hosted DNS. The cost of DNS service for any large-traffic site is an important consideration.
[+] togasystems|15 years ago|reply
As a fellow Mountain biker, I was surprised and glad that I saw pinkbike on HN.