Contrived example: a Puppet script that configures a VM with DNS servers, which are not otherwise provided in the image or by DHCP. You need to get that script itself somehow.
I know a couple of people who work on locked-down networks with firewalls that require permitted sites to be specified by IP address. If you're not confident that DNS will always return the same IP address (e.g. with round-robin DNS), a common tactic is to add a specific IP to the hosts file so it always matches the rule in the firewall.
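The hosts-file pinning described above is just a static override; a minimal sketch (the hostname and address here are illustrative, not real entries):

```
# /etc/hosts -- pin the name to one fixed address so traffic always
# matches the IP-based firewall rule, regardless of what DNS returns.
# 203.0.113.10 is a documentation address; substitute the real one.
203.0.113.10    api.example.internal
```

The obvious trade-off, as the comments below note, is that the entry silently goes stale when the service's real address changes.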
Sometimes, DNS will break for me (someone's DNS server will get hit by a DDoS, say), and I'll add the appropriate records to my /etc/hosts to work around it. I may not notice that I've done this until several years later, when it stops working because their IP address changed.
For what it's worth, we have an API endpoint that has had the right IP network listed ever since it went active. If you _need_ to verify IPs for whatever reason, use this:
I think the more likely explanation is the behavior I have seen in IE8 and below, where DNS entries get cached and, as long as the server responds correctly, the cache is not busted. On one particular upgrade we brought the TTL down to 60 seconds and still had to wait three days for the traffic to subside.
Also, at least older versions of nginx in reverse proxying mode have the same issue. Had to send the nginx processes a SIGHUP or restart them entirely (don't remember which) to get them to use the new IPs.
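For context on the nginx behavior: with a plain hostname in `proxy_pass`, nginx resolves the name once at startup/reload and caches it. A common workaround is to use a `resolver` directive plus a variable, which forces runtime re-resolution. A sketch (resolver address and upstream hostname are illustrative):

```nginx
# Re-resolve the upstream name at runtime instead of only at startup.
resolver 127.0.0.1 valid=300s;   # re-resolve at most every 300 seconds

server {
    listen 80;
    location / {
        # Using a variable makes nginx look the name up via the
        # resolver per request, honoring the valid= cache window.
        set $upstream "backend.example.com";
        proxy_pass http://$upstream;
    }
}
```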
I recently learned the hard way that Firefox has a cache that does the same thing. Obsolete documentation lists a 60 second cache but I watched it making requests to the wrong server almost an entire day after the (3 hour TTL) records were changed. And yes, I checked that other apps showed the right IP.
I believe some ISPs have a TTL threshold under which they use their internal default. i.e. a TTL of 60 seconds would be ignored and become 86400, whereas a 3600 second TTL might still be respected.
We check a request's IP address for blocks owned by GitHub as a form of authentication for service hooks to consume commits and implement continuous integration/delivery on a project.
They list these blocks in the service hooks configuration page. Are these address blocks also due for change/deprecation?
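Checking a hook request's source IP against published CIDR blocks, as described above, can be done with Python's standard `ipaddress` module. A minimal sketch; the real, current block list should come from GitHub's meta API (http://developer.github.com/v3/meta/), and the CIDRs below are illustrative only:

```python
import ipaddress

# Illustrative CIDR blocks -- fetch the live list from the meta API
# ("hooks" key) rather than hardcoding it, for the reasons in this thread.
EXAMPLE_HOOK_BLOCKS = ["192.30.252.0/22", "185.199.108.0/22"]

def ip_in_blocks(ip: str, blocks: list[str]) -> bool:
    """Return True if `ip` falls inside any of the given CIDR blocks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(block) for block in blocks)

print(ip_in_blocks("192.30.252.1", EXAMPLE_HOOK_BLOCKS))  # True
print(ip_in_blocks("203.0.113.7", EXAMPLE_HOOK_BLOCKS))   # False
```

Note this only authenticates the sender's network, not the payload; a shared secret in the hook is a stronger check.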
voidlogic | 12 years ago
Theoretical gains?
* They can try to pick a server near them (I don't think the servers are geo-distributed)
* No external DNS lookup (silly, what's a one-time cached 150 ms between friends)
* Immune to DNS outage (lol, if your DNS is down, other stuff is probably broken too..)
* More resistant to someone pretending to be GitHub (except that GitHub uses HTTPS..)
So really no point I can think of... but it sure makes your computer brittle to GitHub changing anything!
technoweenie | 12 years ago
http://developer.github.com/v3/meta/
Dylan16807 | 12 years ago
https://bugzilla.mozilla.org/show_bug.cgi?id=861273
https://bugzilla.mozilla.org/show_bug.cgi?id=709976