top | item 41205444


jcrites | 1 year ago

Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

It's nice that this is available, but if I were building a new internal system today, I'd use a regular domain name as the root. There are a number of reasons, and one of them is that it's incredibly nice to have the flexibility to make a name visible on the Internet, even if the system itself is completely private and internal.

You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.

Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I were pretty sure that a given system would never need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish records for the domain on your private network, while retaining the option to publish them publicly later.

When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on a DNS approach that used a .com domain as the root of everything for this reason, even for services that are reachable only from private networks. These services also employed regular public TLS certificates by default, for simplicity's sake. If a service needs to be reachable from a new network, or from the Internet, then it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and comfortable with this, though it does have tradeoffs, namely that the presence of names in CT logs can reveal information.


ghshephard|1 year ago

Number one reason that comes to mind is you prevent the possibility of information leakage. You can't screw up your split-dns configuration and end up leaking your internal IP space if everything is .internal.

It's much the same reason why some very large IPv6 services deploy protected IPv6 space in RFC 4193 fc00::/7 space. Of course you have firewalls, and of course you have all sorts of layers of IDS and air-gaps as appropriate. But if by design you don't want to make this space reachable outside the enterprise, the extra step is a belt-and-suspenders approach.

So, even if I mess up my firewall rules and do leak a critical control point (FD41:3165:4215:0001:0013:50ff:fe12:3456), you wouldn't be able to route to it anyway.

Same thing with .internal - that will never be advertised externally.
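An address like FD41:3165:4215:... above has exactly the shape RFC 4193 prescribes: `fd` followed by a 40-bit pseudo-random Global ID, giving each site its own /48 that is unlikely to collide if networks ever merge. A minimal sketch of the RFC's derivation, with the host's EUI-64 input faked with random bytes for illustration:

```python
# Sketch of the RFC 4193 section 3.2.2 algorithm for picking a ULA prefix:
# fd00::/8 plus a 40-bit pseudo-random Global ID derived from SHA-1.
import hashlib
import ipaddress
import os
import time

def generate_ula_prefix() -> ipaddress.IPv6Network:
    """Derive a random /48 Unique Local Address prefix."""
    # Inputs: time of day in 64-bit NTP-style format, plus an EUI-64.
    ntp_time = int(time.time() * 2**32).to_bytes(8, "big")
    eui64 = os.urandom(8)  # a real implementation would use a MAC-derived EUI-64
    digest = hashlib.sha1(ntp_time + eui64).digest()
    global_id = digest[-5:]  # least-significant 40 bits of the hash
    prefix_bytes = b"\xfd" + global_id + b"\x00" * 10  # 16 bytes total
    return ipaddress.IPv6Network((prefix_bytes, 48))

prefix = generate_ula_prefix()
print(prefix)  # a random /48 under fd00::/8, different each run
```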

quectophoton|1 year ago

> Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

That assumes you are able to pay to rent a domain name and keep paying for it; that you are reasonably sure the company you're renting it from is not going to take it away from you because of a selectively-enforced TOS; and that you are reasonably sure both you and your registrar are doing everything possible to avoid getting your account compromised (which would result in your domain being transferred to someone else and probably lost forever unless you can take legal action).

So it might depend on your threat model.

Also, a good example (and maybe the main reason for this specific name instead of other proposals) is that big corps are already using it, e.g. for DNS search domains in AWS EC2 instances, and don't want someone else to register it.

justin_oaks|1 year ago

If you control the DNS resolution in your company and use an internal certificate authority, technically you don't have to rent a domain name. You can control how it resolves and "hijack" whatever domain name you want. It won't be valid outside your organization/network, but if you're using it only for internal purposes then that doesn't matter.

Of course, this is a bad idea, but it does allow you to avoid the "rent".
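For clients that use your resolver, a single dnsmasq line is enough to do this kind of "hijack" (the name and address here are made up):

```
# dnsmasq: answer authoritatively for a domain you don't own,
# pointing it at an internal host. Only clients using this resolver
# are affected; the real Internet records are untouched.
address=/wiki.example-corp.com/10.0.0.42
```

Anything resolving through that dnsmasq instance gets 10.0.0.42 for the name; everyone else still gets the public answer.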

briHass|1 year ago

I just got burned on my home network by running my own CA (.home) and DNS for connected devices. The Android warning when installing a self-signed CA ('someone may be monitoring this network') is fine for my case, if annoying, but my current blocker is using webhooks from a security camera to Home Assistant.

HA allows you to use a self-signed cert, but if you turn on HTTPS, your webhook endpoints must also use HTTPS with that cert. The security camera doesn't allow me to mess with its certificate store, so it's not going to call a webhook endpoint with a self-signed/untrusted root cert.

Sure, I could probably run an HTTP->HTTPS proxy that ignores my cert, but it all starts to feel like a massive kludge to be your own CA. Once again, we're stuck in this annoying scenario where certificates serve two goals, encryption and verification, but internal use really only cares about the former.

Trying to save a few bucks by not buying a vanity domain for internal/test stuff just isn't worth the effort. Most systems (HA included) support ACME clients to get free certs, and I guess for IoT stuff, you could still do one-off self-signed certs with long expiration periods, since there's no way to automate rotation of wildcards for LE.
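For the one-off self-signed route, something like this (hostname and filenames are placeholders) produces a ten-year cert a device can import once:

```shell
# Hypothetical example: a long-lived self-signed cert for an IoT device.
# 3650 days ~= 10 years; the SAN must match the internal hostname.
# (-addext requires OpenSSL 1.1.1 or newer.)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout camera.key -out camera.crt \
  -days 3650 -subj "/CN=camera.home" \
  -addext "subjectAltName=DNS:camera.home"
```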

yjftsjthsd-h|1 year ago

> Once again, we're stuck in this annoying scenario where certificates serve 2 goals: encryption and verification, but internal use really only cares about the former.

Depending on your threat model, I'm not sure that's true. Encryption without verification prevents a passive observer from seeing the content of a connection, but does nothing to prevent an active MITM from decrypting it.

xp84|1 year ago

Something you may find helpful: I use a `cloudflared` tunnel to add an ssl endpoint for use outside my home, without opening any holes in the firewall. This way HA doesn’t care about it (it still works on 10.x.y.z) and your internal webhooks can still be plain http if you want.
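For reference, the ingress section of a `cloudflared` config.yml for this kind of setup looks roughly like the following (tunnel ID, hostname, and internal address are all placeholders):

```
# config.yml for cloudflared; values are placeholders
tunnel: 00000000-0000-0000-0000-000000000000
credentials-file: /etc/cloudflared/tunnel-creds.json
ingress:
  - hostname: ha.example.com
    service: http://10.0.0.5:8123   # Home Assistant over plain HTTP internally
  - service: http_status:404        # catch-all for unmatched hostnames
```

Cloudflare terminates TLS at its edge and the tunnel carries traffic to the internal HTTP service, so nothing on the LAN needs a certificate.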

bawolff|1 year ago

I think there is a benefit that it reduces possibility of misconfiguration. You can't accidentally publish .internal. If you see a .internal name, there is never any possibility of confusion on that point.

mnahkies|1 year ago

Somewhat off topic, but I'm a big fan of fail safe setups.

One of the (relatively few) things that frustrate me about GKE is the integration between GCP IAP and k8s Gateways: it's a separate resource from the HTTP route, and if you fail to apply it, or apply one with an invalid configuration, it fails open.

I'd much prefer an interface where I could specify my intention next to the route and have it fail atomically and/or fail closed.

zrm|1 year ago

> You can't accidentally publish .internal.

Well sure you can. You expose your internal DNS servers to the internet, or use the same DNS servers for both and they're on the internet. The root servers are not going to delegate a request for .internal to your nameservers, but anybody can make the request directly to your servers if they're publicly accessible.

thebeardisred|1 year ago

Additionally, how do you define "publish"?

When someone embeds https://test.internal with cert validation turned off (rather than fingerprint pinning or setting up an internal CA) in their mobile application, that client will greedily accept whatever response is provided by its local resolver... correct or malicious.

samstave|1 year ago

This. And it allows for much easier, more trustworthy automated validation of [pipeline], such as ensuring that something doesn't leak, exfil, or egress inadvertently (even perhaps with exclusive/unique routing?).

leeter|1 year ago

I can't speak for others but HSTS is a major reason. Not everybody wants to deal with setting up certs for every single application on a network but they want HSTS preload externally. I get why for AWS the solution of having everything from a .com works. But for a lot of small businesses it's just more than they want to deal with.

Another reason is information leakage. Having DNS records leak could provide information on things you'd rather not have public. Devs can be remarkably insensitive to the fact that they are leaking information through things like domain names.

jcrites|1 year ago

> Having DNS records leak could actually provide potential information on things you'd rather not have public.

This is true, but using a regular domain name as your root does not require you to actually publish those DNS records on the Internet.

For example, say that you own the domain `example.com`. You can build a private service `foo.example.com` and only publish its DNS records within the networks where it needs to be resolved – in exactly the same way that you would with `foo.internal`.

If you ever decide that you want an Internet-facing endpoint, just publish `foo.example.com` in public DNS.
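One common way to implement this is split-horizon DNS, e.g. with BIND views; a sketch, where the zone file names are hypothetical:

```
// Split-horizon DNS in BIND: internal clients get records for
// foo.example.com, everyone else sees only the public zone.
view "internal" {
    match-clients { 10.0.0.0/8; 192.168.0.0/16; };
    zone "example.com" {
        type master;
        file "example.com.internal.zone";  // contains foo.example.com
    };
};
view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "example.com.public.zone";    // no foo record
    };
};
```

The same domain serves both audiences; "going public" for a name is just adding its record to the external zone file.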

zzo38computer|1 year ago

Sometimes it may be reasonable to use subdomains of other domain names that you have registered, but sometimes it would not be appropriate, such as when you are not using the internet at all and therefore should not need to register a domain name. If it is not necessary to use internet domain names, then you would likely want to avoid them (or, at least, I would).

macromaniac|1 year ago

>Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

These local TLDs should IMO be used on all home routers; they fix a lot of problems.

If you've ever plugged in e.g. a Raspberry Pi and been unable to `ping pi`, it's because there is no DNS mapping for it. There are kludges that Windows, Linux, and Macs use to get around this, but they only work within their own ecosystem, so you often can't see Macs from e.g. Windows. It's a total mess that leads to confusing resolution behaviour; you end up having to look in the router page or hardcode the IP to reach a device, which is just awful.

Home routers can simply map pi to e.g. pi.home when doing DHCP. Then you can `ping pi` on all systems. It fixes everything; for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.

Also, p. sure I grew up playing wc3 w you?

e28eta|1 year ago

> Home routers can simply assign pi into e.g. pi.home when doing dhcp. Then you can "ping pi" on all systems. It fixes everything- for that reason alone these reserved TLDs are, imo, useful. Unfortunately I've never seen a router do this, but here's hoping.

dnsmasq has this feature. I think it’s commonly available in alternative router firmware.

On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So as my network’s DHCP + DNS server, it automatically adds dns entries for dhcp leases that it hands out.

There are undoubtedly other options, but these are the two I've worked with.
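The relevant dnsmasq options are roughly these (the suffix and address range are examples); when dnsmasq acts as both DHCP and DNS server, each lease's hostname becomes resolvable under the suffix automatically:

```
domain=home            # qualify DHCP client hostnames with ".home"
expand-hosts           # also qualify names from /etc/hosts
local=/home/           # never forward .home queries upstream
dhcp-range=192.168.1.50,192.168.1.150,12h
```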

johannes1234321|1 year ago

A big area is consumer devices like WiFi routers. They can advertise the .internal name, perhaps even get TLS certificates for those names, and things may just work.

See for instance the trouble with AVM's fritz.box domain, which was used by their routers by default; then .box was made a TLD and AVM was too late to register it.

pid-1|1 year ago

> leading Amazon's strategy for cloud-native AWS usage internally

I've been on the other end of the business scale for the past decade, mostly working for SMBs like hedge funds.

That made me a huge private DNS hater. So much trouble for so little security gain.

Still, the common wisdom seems to be to use private DNS for internal apps, AD and such, LAN hostnames, and the like.

I've been using public DNS exclusively everywhere I've worked and I always feel like it's one of the best arch decisions I'm bringing to the table.

JackSlateur|1 year ago

Exactly

And the larger the scale, the more benefits you get from avoiding internal-specific resolution.

TheRealPomax|1 year ago

Pretty much "anything that has to use a real network address, resolved via DNS" rather than the hosts-file-based loopback device or the broadcast IP.

slashdave|1 year ago

> it's helpful to have that flexibility in the future

On the contrary, it is helpful to make this impossible. Otherwise you invite leaking private info through a configuration mistake.