"Chrome also tries to find out if someone is messing up with the DNS (i.e. “nasty” ISPs that have wildcard DNS servers to catch all domains). Chrome does this by issuing 3 DNS requests to randomly generated domain names,"
I have a question. Why would wildcard domains be 'nasty'? It's not that unusual, is it? Would Google discourage this?
I can imagine that a site uses wildcard domains to differentiate the content it should serve, for example a blog provider: john.bloggo.com, luyt.bloggo.com, johndoe.bloggo.com etc... You wouldn't want to put a few thousand customer names in your DNS tables; instead, use *.bloggo.com and let the webapp interpret the subdomain name.
Or in the case of a development server: ftp.domain.com, svn.domain.com, git.domain.com and www.domain.com might all resolve to the same IP, with the corresponding service found on that machine. If the admin were to add another service, say SMTP, he could just install a mailer on port 25 and wouldn't have to adjust the DNS.
Yes, I think so. When ISPs wildcard all domains, I think the typical reason is so that when you've typed an invalid name, they return a page with search results guessing at your intent. If I'm right (I might not be), then Google has plenty of incentive to stop that from happening, or at least know that it is happening.
First, those pages with search results are often ugly, full of ads, poorly formatted, etc. Google would like you to have a nice experience in Chrome so that you're a happy user. If you see an ugly, messy ad-ridden page (especially if the ads aren't Google ads) every time you make a typing mistake, Google loses out.
There are probably other more subtle and nuanced reasons that Google would dislike that practice, but I'm not really sure of specifics. (In fact, I'm not entirely sure that DNS wildcarding is even related to those ISP search pages...but I think it is.)
I have a question. Why would wildcard domains be 'nasty'?
It's not that unusual, is it? Would Google discourage this?
You're thinking of wildcard subdomains, where a domain's name servers are configured so that *.domain.com resolves.
Google is taking action against wildcard domains, where the nameserver resolves every domain to some IP. Valid domains are (usually) resolved to the correct IP, invalid domains are resolved to the ISP's ad-laden search page.
Wildcard domains (*.bloggo.com) aren't nasty. Wildcard DNS servers are. They inject domain name data into zones owned by other entities; if you type news.ycombinator.com, you get this server, but nwes.ycombinator.com gives me a JS redirect to www.ayudaenlabusqueda.com.ar/?query=nwes.
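The probing trick from the article (three lookups of randomly generated names) can be approximated like this. It's a rough sketch of the idea, not Chrome's actual implementation:

```python
# Rough sketch of the detection trick described above: resolve a few
# random hostnames that should not exist. If every bogus name resolves,
# the upstream resolver is almost certainly wildcarding NXDOMAIN
# responses. An illustration of the idea only, not Chrome's code.
import random
import socket
import string

def random_hostname(length: int = 10) -> str:
    """A hostname that is extremely unlikely to be registered."""
    label = "".join(random.choices(string.ascii_lowercase, k=length))
    return label + ".com"

def dns_is_wildcarded(probes: int = 3) -> bool:
    hits = 0
    for _ in range(probes):
        try:
            socket.gethostbyname(random_hostname())  # should fail (NXDOMAIN)
            hits += 1
        except OSError:
            pass  # lookup failed: the honest answer for a bogus name
    return hits == probes  # every bogus name resolved -> wildcarding
```

A browser (or anything else) that sees all three probes succeed knows error pages from the resolver can't be trusted to mean "no such site".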
"Chrome automatically tries to determine if the user typed in a domain and tries to resolve it in the background."
When I first switched my Windows IPv4 DNS settings to use OpenNIC, I noticed:
Chrome knows that a string typed into the address bar (one that isn't already found in history or previously typed) ending with any normal suffix like .com or .uk is a URL, rather than just a query to be sent to the default search engine.
But it doesn't know what to do with strings that have OpenNIC-only suffixes like .free on them.
Unless I make it clear that I'm typing a URL and not a query, by putting http:// on the front, it doesn't recognize it as a URL, because it has the non-standard .free or similar on the end.
Typing example.free or www.example.free for the first time just sends it to the search engine. So it seems Chrome only knows/validates the official DNS suffixes.
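A toy version of that URL-vs-query decision might look like the following. The tiny suffix set is a stand-in for whatever list Chrome actually consults, and the logic is a guess at the behavior described, not the real omnibox code:

```python
# Toy version of the URL-vs-query decision described above. The small
# KNOWN_SUFFIXES set stands in for whatever suffix list Chrome really
# uses; OpenNIC-only suffixes like .free are deliberately absent,
# reproducing the behavior the comment complains about.
KNOWN_SUFFIXES = {"com", "net", "org", "uk", "de"}

def treat_as_url(text: str) -> bool:
    if text.startswith(("http://", "https://")):
        return True  # an explicit scheme always marks it as a URL
    host = text.split("/")[0]
    if " " in host or "." not in host:
        return False  # spaces or no dot: treat it as a search query
    return host.rsplit(".", 1)[1] in KNOWN_SUFFIXES  # known suffix?

print(treat_as_url("example.com"))          # True: recognized suffix
print(treat_as_url("example.free"))         # False: goes to search
print(treat_as_url("http://example.free"))  # True: scheme forces URL
```

Under this heuristic, any suffix missing from the list falls through to search, which matches the .free behavior observed above.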
I noticed the random DNS requests the other day when I had to tunnel my traffic over ssh from the local library, but didn't think to figure out what the requests were. Interesting that this article shows up now.
Chrome should stick to its true nature, do one thing well, and leave the DNS stuff (refetch after..., the weird anti-nasty system explained in the post) to a DNS cache daemon. Google could include one in Chrome OS, rather than turning Chrome (the browser) into a big pile of bloat.
Anti-nasty system, maybe. The prefetching thing can't possibly be implemented by the DNS cache daemon because it doesn't know what the user is currently typing.
So instead of shipping one executable with code X + Y, they should ship two executables: one with code X and one with Y + Z, increasing the overall complexity of what they ship and adding the inevitable "bloat" that comes from extracting DNS code into a daemon, plus the additional code to communicate between Chrome and that daemon (the Z part).
That doesn't sound like a sound engineering practice.
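The prefetching point above can be illustrated with a sketch: only the code that sees each keystroke knows which hosts the user is likely heading for, so a standalone DNS daemon can't do this part. Everything here (the history list, the prefix matching) is invented for illustration:

```python
# Sketch of why omnibox DNS prefetching lives in the browser: only the
# program that sees each keystroke can decide what to resolve ahead of
# time. The HISTORY list and prefix matching are invented here.
import socket
import threading

HISTORY = ["news.ycombinator.com", "example.com", "example.org"]

def prefetch(host: str) -> None:
    """Warm the OS resolver cache in the background; ignore failures."""
    def work() -> None:
        try:
            socket.gethostbyname(host)
        except OSError:
            pass  # prefetching is best-effort
    threading.Thread(target=work, daemon=True).start()

def on_keystroke(typed: str) -> list[str]:
    """Per keystroke: prefetch every history entry matching the prefix."""
    matches = [h for h in HISTORY if h.startswith(typed)]
    for host in matches:
        prefetch(host)
    return matches
```

By the time the user hits enter, the resolution for the chosen host has often already completed, which is the latency win prefetching is after.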
soult|15 years ago: http://publicsuffix.org/
uxp|15 years ago: If anyone is interested, my auth.log shows: https://gist.github.com/797184
jacquesm|15 years ago: The owner of www.go can't be all that happy with the flood of requests!