jcrites | 1 year ago
It's nice that this is available, but if I was building a new system today that was internal, I'd use a regular domain name as the root. There are a number of reasons, and one of them is that it's incredibly nice to have the flexibility to make a name visible on the Internet, even if it is completely private and internal.
You might want private names to be reachable that way if you're following a zero-trust security model, for example; and even if you aren't, it's helpful to have that flexibility in the future. It's undesirable for changes like these to require re-naming a system.
Using names that can't be resolved from the Internet feels like all downside. I think I'd be skeptical even if I was pretty sure that a given system would not ever need to be resolved from the Internet. [Edit:] Instead, you can use a domain name that you own publicly, like `example.com`, but only ever publish records for the domain on your private network, while retaining the option to publish them publicly later.
When I was leading Amazon's strategy for cloud-native AWS usage internally, we decided on an approach for DNS that used a .com domain as the root of everything for this reason, even for services that are only reachable from private networks. These services also employed regular public TLS certificates by default, for simplicity's sake. If a service needs to be reachable from a new network, or from the Internet, then it doesn't require any changes to naming or certificates, nor any messing about with CA certs on the client side. The security team was forward-thinking and was comfortable with this, though it does have tradeoffs, namely that the presence of names in CT logs can reveal information.
ghshephard|1 year ago
It's much the same reason why some very large IPv6 services deploy protected IPv6 space from the RFC 4193 unique local address (ULA) range, fc00::/7. Of course you have firewalls. And of course you have all sorts of layers of IDS and air-gaps as appropriate. But if by design you don't want to make this space reachable outside the enterprise, the extra steps are a belt-and-suspenders approach.
So, even if I mess up my firewall rules and do leak a critical control point: FD41:3165:4215:0001:0013:50ff:fe12:3456 - you wouldn't be able to route to it anyways.
Same thing with .internal - that will never be advertised externally.
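That "not routable anyway" property can be checked mechanically; here's a minimal sketch using Python's stdlib `ipaddress` module, with the hypothetical address from the comment above:

```python
import ipaddress

# RFC 4193 unique local addresses: fc00::/7 (in practice, fd00::/8 is used)
ULA = ipaddress.ip_network("fc00::/7")

addr = ipaddress.ip_address("fd41:3165:4215:1:13:50ff:fe12:3456")
print(addr in ULA)     # -> True: it's a ULA
print(addr.is_global)  # -> False: not routable on the public Internet
```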
quectophoton|1 year ago
That assumes you are able to pay to rent a domain name and keep paying for it; that you are reasonably sure the company you're renting it from won't take it away from you under a selectively enforced TOS; and that you are reasonably sure both you and your registrar are doing everything possible to keep your accounts from being compromised (which could result in your domain being transferred to someone else and probably lost forever unless you can take legal action).
So it might depend on your threat model.
Also, a good example, and maybe the main reason for this specific name instead of other proposals, is that big corps are already using it (e.g. DNS search domains in AWS EC2 instances) and don't want someone else to register it.
justin_oaks|1 year ago
Of course, this is a bad idea, but it does allow you to avoid the "rent".
briHass|1 year ago
Home Assistant (HA) allows you to use a self-signed cert, but if you turn on HTTPS, your webhook endpoints must also use HTTPS with that cert. The security camera doesn't allow me to mess with its certificate store, so it's not going to call a webhook endpoint that presents a self-signed/untrusted-root cert.
Sure, I could probably run an HTTP->HTTPS proxy that would ignore my cert, but it all starts to feel like a massive kludge to be your own CA. Once again, we're stuck in this annoying scenario where certificates serve two goals, encryption and verification, but internal use really only cares about the former.
Trying to save a few bucks by not buying a vanity domain for internal/test stuff just isn't worth the effort. Most systems (HA included) support ACME clients to get free certs, and I guess for IoT stuff, you could still do one-off self-signed certs with long expiration periods, since there's no way to automate rotation of wildcards for LE.
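For the one-off self-signed route, this is a minimal sketch; the hostname `camera.internal` is just an example, and `-addext` assumes OpenSSL 1.1.1 or newer:

```shell
# Generate a ~10-year self-signed cert + unencrypted key for an internal host.
# Modern clients check subjectAltName, not just CN, hence -addext.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout camera.key -out camera.crt \
  -days 3650 \
  -subj "/CN=camera.internal" \
  -addext "subjectAltName=DNS:camera.internal"
```

You'd then load `camera.crt` into whatever trust store the device does let you control.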
yjftsjthsd-h|1 year ago
Depending on your threat model, I'm not sure that's true. Encryption without verification prevents a passive observer from seeing the content of a connection, but does nothing to prevent an active MITM from decrypting it.
mnahkies|1 year ago
One of the (relatively few) things that frustrates me about GKE is the integration between GCP IAP and k8s Gateways: the IAP policy is a separate resource from the HTTP route, and if you fail to apply it, or apply one with an invalid configuration, then it fails open.
I'd much prefer an interface where I could specify my intention next to the route and have it fail atomically and/or fail closed
zrm|1 year ago
Well sure you can. You expose your internal DNS servers to the internet, or use the same DNS servers for both and they're on the internet. The root servers are not going to delegate a request for .internal to your nameservers, but anybody can make the request directly to your servers if they're publicly accessible.
thebeardisred|1 year ago
When someone embeds https://test.internal with cert validation turned off (rather than fingerprint pinning or setting up an internal CA) in their mobile application, that client will greedily accept whatever response is provided by its local resolver... correct or malicious.
leeter|1 year ago
Another reason is information leakage. Having DNS records leak could actually provide potential information on things you'd rather not have public. Devs can be remarkably insensitive to the fact they are leaking information through things like domains.
jcrites|1 year ago
This is true, but using a regular domain name as your root does not require you to actually publish those DNS records on the Internet.
For example, say that you own the domain `example.com`. You can build a private service `foo.example.com` and only publish its DNS records within the networks where it needs to be resolved – in exactly the same way that you would with `foo.internal`.
If you ever decide that you want an Internet-facing endpoint, just publish `foo.example.com` in public DNS.
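As a sketch of what "publish only internally" can look like, an Unbound resolver on the private network might carry local data for names that public DNS has never seen; the name and address below are illustrative:

```
# unbound.conf fragment: answer foo.example.com for internal clients only.
# Public DNS for example.com never carries this record; moving it to the
# public zone later requires no renaming.
server:
    local-zone: "example.com." transparent
    local-data: "foo.example.com. IN A 10.0.12.34"
```

`transparent` means names without local data still resolve normally through public DNS, so the rest of `example.com` keeps working.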
macromaniac|1 year ago
These local TLDs should, IMO, be used on all home routers; they fix a lot of problems.
If you've ever plugged in e.g. a Raspberry Pi and been unable to "ping pi", it's because there is no DNS mapping for it. There are kludges that Windows, Linux, and Macs use to get around this fact, but they only work within their own ecosystem, so you often can't see Macs from e.g. Windows. It's a total mess that leads to confusing resolution behaviour; you end up having to look in the router's admin page or hardcode the IP to reach a device, which is just awful.
Home routers could simply map pi to e.g. pi.home when handing out DHCP leases. Then you could "ping pi" on all systems. It fixes everything; for that reason alone these reserved TLDs are, IMO, useful. Unfortunately I've never seen a router do this, but here's hoping.
Also, p. sure I grew up playing wc3 w you?
e28eta|1 year ago
dnsmasq has this feature. I think it’s commonly available in alternative router firmware.
On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So as my network’s DHCP + DNS server, it automatically adds dns entries for dhcp leases that it hands out.
There are undoubtedly other options, but these are the two I've worked with.
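In dnsmasq terms, the lease-to-DNS behavior described above is only a few lines of configuration; the domain and address range here are illustrative:

```
# dnsmasq fragment: hand out DHCP leases and register each client's
# hostname under .home, so a device announcing "pi" resolves as pi.home
# (and as bare "pi" via the DHCP-provided search domain).
domain=home
expand-hosts
local=/home/
dhcp-range=192.168.1.100,192.168.1.200,12h
```

`local=/home/` keeps queries for .home from ever being forwarded upstream.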
johannes1234321|1 year ago
See for instance the trouble with AVM's fritz.box domain, which its routers used by default; then .box was made a TLD and AVM was too late to register it.
pid-1|1 year ago
I've been on the other end of the business scale for the past decade, mostly working for SMBs like hedge funds.
That made me a huge private DNS hater. So much trouble for so little security gain.
Still, the common wisdom seems to be to use private DNS for internal apps, AD and such, LAN hostnames and the like.
I've been using public DNS exclusively everywhere I've worked and I always feel like it's one of the best arch decisions I'm bringing to the table.
JackSlateur|1 year ago
And the larger the scale, the more benefits you get from avoiding internal-specific resolution.
slashdave|1 year ago
On the contrary, it is helpful to make this impossible. Otherwise you invite leaking private info by configuration mistake.