Docker has a known security issue with port exposure: it punches holes through the host firewall without asking your permission (see https://github.com/moby/moby/issues/4737).
I usually expose ports like `127.0.0.1:1234:1234` instead of `1234:1234`. As far as I understand, it still punches holes this way but to access the container, an attacker would need to get a packet routed to the host with a spoofed IP SRC set to `127.0.0.1`. All other solutions that are better seem to be much more involved.
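If you want to sanity-check a compose file for this, the loopback test can be automated. This is a hypothetical helper (the function name and the docker-style `HOST:CONTAINER` spec parsing are my own sketch, not a Docker API), using only the standard library:

```python
import ipaddress

def is_loopback_binding(port_spec: str) -> bool:
    """Check whether a docker-style port mapping like
    '127.1.2.3:5432:5432' binds to a loopback address.
    A bare '5432:5432' (no host IP) binds to all interfaces."""
    parts = port_spec.split(":")
    if len(parts) < 3:
        return False  # no host address given -> defaults to 0.0.0.0
    try:
        # is_loopback is true for the whole 127.0.0.0/8 block
        return ipaddress.ip_address(parts[0]).is_loopback
    except ValueError:
        return False  # not a plain IPv4 address

print(is_loopback_binding("127.1.2.3:5432:5432"))  # True
print(is_loopback_binding("5432:5432"))            # False
```

Note this only checks what Docker is told to bind; as discussed elsewhere in the thread, it says nothing about L2-adjacent hosts forwarding packets in.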
Containers are widely used at our company, by developers who don't understand underlying concepts, and they often expose services on all interfaces, or to all hosts.
You can explain this to them, they don't care, you can even demonstrate how you can access their data without permission, and they don't get it.
Their app "works" and that's the end of it.
Ironically enough, even the cybersecurity team doesn't catch them for it; they are too busy harassing other teams about out-of-date versions of services that are either not vulnerable, or already patched in a way their scanning tools don't understand.
I wonder how many people realize you can use the whole 127.0.0.0/8 address space, not just 127.0.0.1. I usually use a random address in that space for all of a specific project's services that need to be exposed, like 127.1.2.3:3000 for web and 127.1.2.3:5432 for postgres.
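As a concrete sketch of that pattern — the service names, image tags, and the 127.1.2.3 address below are all illustrative, not taken from the article:

```yaml
# docker-compose.yml (illustrative)
services:
  web:
    image: example/web-app          # hypothetical image
    ports:
      - "127.1.2.3:3000:3000"       # loopback only, per-project address
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me  # never rely on a default password
    ports:
      - "127.1.2.3:5432:5432"       # not reachable from other hosts
```

From the host, `psql -h 127.1.2.3 -p 5432` reaches the database, while nothing is listening on the external interfaces.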
We’ve found this out a few times when someone inexperienced with Docker exposed a Redis port and ran `docker compose up` on a publicly accessible machine. It would take only minutes until that Redis instance was infected. Blame Redis, too, for shipping with the ability to run arbitrary code without auth by default.
`-p 127.0.0.1:` might not offer all of the protections you would expect, and that's arguably a bug in Docker's firewall rules that they're failing to address. They choose instead to say, effectively, "we don't protect against L2 attacks", and have an open issue here: https://github.com/moby/moby/issues/45610.
This secondary issue with Docker is a bit more subtle: Docker doesn't respect the bind address when it forwards traffic into the container. The end result is that machines one hop away can forward packets into the Docker container.
For a home user, the impact could be that the ISP can reach into the container. Depending on your risk appetite this can be a real concern (think Salt Typhoon going after ISPs).
More commonly, it might end up exposing more isolated work-related systems to networks one hop away.
Tbh I prefer not exposing any ports directly, and then throwing Tailscale on the network used by docker. This automatically protects everything behind a private network too.
> "None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet."
This prompts a reflection: as an industry, we should do a better job of providing solid foundations.
When I check tutorials on how to drill into a wall, there is (almost) no warning about how I could lose a finger doing so. It is expected that I know I should be careful around power tools.
How do we make some information part of common sense? "Minimize the surface of exposure on the Internet" should be drilled into everyone, but we are clearly not there yet.
Tailscale is a great solution for this problem. I too run a home server with Nextcloud and other stuff, but protected behind a Tailscale (WireGuard) VPN. I can't even imagine exposing something like my family's personal data over the internet, no matter how convenient it is.
But I sympathize with OP. He is not a developer, and it is sad that whatever software engineers produce is vulnerable to script kiddies. Exposing a database or any server with a good password should not be exploitable in any way. C and C++ have been failing us for decades, yet we continue to use such unsafe stacks.
C and C++ are not responsible for all the evil in the world. Yes, I know some Rust evangelists want to tell us that, but most servers get owned through configuration mistakes.
Thanks, I've got a homelab/server with a few layers of protection right now, but had been wanting to just move to a vpn based approach - this looks really nice and turnkey, though I dislike the requirement of using a 3P IDP. Still, promising. Cheers.
If you make a product that is so locked down by default that folks need to jump through 10 hoops before anything works then your support forums will be full of people whining that it doesn't work and everybody goes to the competition that is more plug and play.
Realize why Windows still dominates Linux on the average PC desktop? This is why.
I just switched to Tailscale for my home server just before the holidays and it has been absolutely amazing. As someone who knows very little about networking, it was pretty painless to set up. Can’t really speak to the security of the whole system, but I tried my best to follow best practices according to their docs.
There are a lot of vulnerability categories. Memory unsafety is the origin of some of them, but not all.
You could write a similar rant about any development stack and all your rants would be 100% unrelated with your point: never expose a home-hosted service to the internet unless you seriously know your shit.
For all intents and purposes, the only ports you should ever forward are ones that are explicitly designed for being public facing, like TLS, HTTP, and SSH. All other ports should be closed. If you’re ever reaching for DMZ, port forwarding, etc., think long and hard about what you’re doing. This is a perfect problem for Tailscale or WireGuard. You want a remote database? Tailscale.
I even get a weird feeling these days with SSH listening on a public interface. A database server, even with a good password/ACLs, just isn’t a great safe idea unless you can truly keep on top of all security patches.
I think I'm missing something here - what is specific about Docker in the exploit? Nowhere is it mentioned what the actual exploit was, and whether for example a non-containerized postgres would have avoided it.
Should the recommendation rather be "don't expose anything from your home network publicly unless it's properly secured"?
> This was somewhat releiving, as the latest change I made was spinning up a postgres_alpine container in Docker right before the holidays. Spinning it up was done in a hurry, as I wanted to have it available remotely for a personal project while I was away from home. This also meant that it was exposed to the internet, with open ports in the router firewall and everything. Considering the process had been running for 8 days, this means that the infection occured just a day after creating the database. None of the database guides I followed had warned me about the dangers of exposing a docker containerized database to the internet. Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.
Seems like they opened up a postgres container to the Internet (IIRC docker does this whether you want to or not, it punches holes in iptables without asking you). Possibly misconfigured authentication or left a default postgres password?
This is one that can sneak up on you even when you're not intentionally exposing a port to the internet. Docker manages iptables directly by default (you can disable that, but networking between compose services will break). Another common case where this can bite you is using an iptables front-end like ufw and thinking you're exposing just the application; unless you bind to localhost, Postgres (in this case) will be exposed. My recommendation is to review `iptables -L` directly and, where possible, use firewalls closer to the perimeter (e.g. the one from your VPS provider) instead of relying solely on iptables on the same node.
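For that review step, the chains to look at are the ones Docker manages itself (chain names are Docker's defaults; exact output varies by version):

```shell
# Published ports show up as DNAT rules in the nat table's DOCKER chain:
sudo iptables -t nat -L DOCKER -n
# Forwarding decisions for container traffic live in the filter table:
sudo iptables -L DOCKER -n -v
# The one chain Docker leaves alone for your own rules:
sudo iptables -L DOCKER-USER -n -v
```

Rules you add to DOCKER-USER are evaluated before Docker's own rules, so that's the supported place to restrict access without Docker overwriting you on restart.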
I really like the "VPN into home first" philosophy for remote access to my home IT. I was doing OpenVPN into my DD-WRT router for years, and now it's WireGuard into OpenWrt. It's quite easy for me to VPN in first and then do whatever: check security cams, control the house via Home Assistant, print stuff, access my ZFS shared drive, run big scientific simulations or whatever on the big computer, etc. The router VPN endpoint is open to attack, but I think it's a relatively small attack surface.
Plus, you can obfuscate that too by using a random port for Wireguard (instead of the default 51820): if Wireguard isn't able to authenticate (or pre-authenticate?) a client, it'll act as if the port is closed. So, a malicious actor/bot wouldn't even know you have a port open that it can exploit.
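A minimal sketch of that server-side tweak (the keys, addresses, and port number are placeholders):

```ini
# /etc/wireguard/wg0.conf (sketch)
[Interface]
PrivateKey = <server-private-key>
Address    = 10.8.0.1/24
ListenPort = 48712        ; arbitrary high port instead of the default 51820

[Peer]
PublicKey  = <phone-public-key>
AllowedIPs = 10.8.0.2/32
```

WireGuard silently drops any packet that doesn't authenticate, so a scanner sees no response on 48712 either way; the nonstandard port mostly just reduces noise from bots that probe 51820 specifically.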
I use WireGuard to access all in-home stuff as well, but there is one missing feature and one bug with the official WireGuard app for Android that are inconvenient:
- Missing feature: do not connect when on certain SSIDs.
- Bug: when the WG connection is active and I put my phone in flight mode (which I do every night), it drains the battery from full to almost empty during the night.
I've taken this approach as well. The WireGuard clients can be configured to make this basically transparent based on what SSID I'm connected to. I used to do similar with IPSec/IKEv2, but WireGuard is so much easier to manage.
The only thing missing on the client is Split DNS. With my IPSec/IKEv2 setup, I used a configuration profile created with Apple Configurator plus some manual modifications to make DNS requests for my internal stuff go through the tunnel and DNS requests for everything else go to the normal DNS server.
My compromise for WireGuard is that all DNS goes to my home network, but only packets destined for my home subnets go through the tunnel.
> Fortunately, despite the scary log entries showing attempts to change privileges and delete critical folders, it seemed that all the malicious activity was contained within the container.
OP can't prove that. The only way is to scrap the server completely and start with a fresh OS image. If OP has no backup and ansible repo (or anything similar) to configure a new home server quickly, then I guess another valuable lesson was learned here.
Not 100% sure what you mean by "scrapping" the server — do you suggest just reinstalling the OS? I'd default to assuming the hardware itself is compromised somehow if I'm assuming someone had root access. And if you were doing automated backups from something you assume was compromised, I'm not sure restoring from those backups is a great idea either.
A lot of people comment about Docker firewall issues. But it still doesn't answer how an exposed postgres instance leads to arbitrary code execution.
My guess is that the attacker figured out or used the default password for the superuser. A quick lookup reveals that a pg superuser can create extension and run some system commands.
I think the takeaway here is that the pg image should autogenerate a strong password or not start unless the user defines a strong one. Currently it just runs with "postgres" as the default username and password.
> I think the takeaway here is that the pg image should autogenerate a strong password or not start unless the user defines a strong one. Currently it just runs with "postgres" as the default username and password.
Takeaway for beginner application hosters (aka "webmasters") is to never expose something on the open internet unless you're 100% sure you absolutely have to. Everything should default to using a private network, and if you need to accept external connections, do so via some bastion host that isn't actually hosted on your network, which reaches into your private network via proper connections.
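To make the escalation path discussed above concrete, here is the long-documented `COPY ... FROM PROGRAM` technique (a superuser feature since PostgreSQL 9.3; PostgreSQL 11 also grants it via the `pg_execute_server_program` role). This is a sketch of the general attack class, not proof of what happened in this particular incident:

```sql
-- Any shell command runs as the postgres OS user inside the container:
CREATE TABLE loot (line text);
COPY loot FROM PROGRAM 'id';   -- captures the command's stdout into the table
SELECT * FROM loot;
```

With a default or guessed superuser password, this is all it takes to drop a cryptominer into the container — no memory-safety bug required.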
Ok - curious if anyone can provide some feedback for me on this.
I am running Immich on my home server and want to be able to access it remotely.
I’ve seen the options of using wireguard or using a reverse proxy (nginx) with Cloudflare CDN, on top of properly configured router firewalls, while also blocking most other countries. Lots of this understanding comes from a YouTube guide I watched [0].
From what I understand, people say reverse proxy/Cloudflare is faster for my use case, and if everything is configured correctly (which it seems like OP totally missed the mark on here), the threat of breaches into my server should be minimal.
Am I misunderstanding the “minimal” nature of the risk when exposing the server via a reverse proxy/CDN? Should I just host a VPN instead even if it’s slower?
Obviously I don’t know much about this topic. So any help or pointing to resources would be greatly appreciated.
[0] https://youtu.be/Cs8yOmTJNYQ?si=Mwv8YlEf934Y3ZQk
If you care about privacy I wouldn't even consider using Cloudflare or any other CDN, because they get to see your personal data in plain "text". Can you forward a port from the internet to your home network, or are you stuck in some CG-NAT hell?
If you can, then you can just forward the port to your Immich instance, or put it behind a reverse proxy that performs some sort of authentication (password, certificate) before forwarding traffic to Immich. Alternatively you could host your own Wireguard VPN and just expose that to the internet - this would be my preferred option out of all of these.
If you can't forward ports, then the easiest solution will probably be a VPN like Tailscale that will try to punch holes in NAT (to establish a fast direct connection, might not work) or fall back to communicating via a relay server (slow). Alternatively you could set up your own proxy server/VPN on some cheap VPS but that can quickly get more complex than you want it to be.
You don't need any of this, and the article is completely bogus. Having a port forwarded to a database in a container is not a security vulnerability unless the database itself has a vulnerability. The article fails to explain how the attackers actually got remote code execution, blames it on some Docker container vulnerability, and links to a random article as a source that has nothing to do with what it claims.
What you have to understand is that having an immich instance on the internet is only a security vulnerability if immich itself has a vulnerability in it. Obviously, this is a big if, so if you want to protect against this scenario, you need to make sure only you can access this instance, and you have a few options here that don't involve 3rd parties like cloudflare. You can make it listen only on the local network, and then use ssh port tunneling, or you can set up a vpn.
Cloudflare has been spamming the internet with "burglars are burgling in the neighbourhood, do you have burglar alarms" articles, youtube is also full of this.
Reverse proxy is pretty good - you've isolated the machine from direct access so that is something.
I'm in the same boat. I've got a few services exposed from a home server via NGINX with a LetsEncrypt cert. That removes direct network access to your machine.
Ways I would improve my security:
- Adding a WAF (ModSecurity) to NGINX - big time investment!
- Switching from public facing access to Tailscale only (Overlay network, not VPN, so ostensibly faster). Lots of guys on here do this - AFAIK, this is pretty secure.
Reverse proxy vs. Overlay network - the proxy itself can have exploitable vulnerabilities. You should invest some time in seeing how nmap can identify NGINX services, and see if those methods can be locked down. Good debate on it here: https://security.stackexchange.com/questions/252480/blocking...
Piggybacking on your request, I would also like feedback. I also run some services on my home computer. The setup I'm currently using is a VPN (Wireguard) redirecting a UDP port from my router to my PC. Although I am a Software Engineer, I don't know much about networks/infra, so I chose what seemed to me the most conservative approach.
Well, you are better off using Google Photos for securely accessing your photos over the Internet. It is not a matter of securing it once, but of keeping it secure all the time.
The exploit is not known in this case. The claim that it was password protected seems like an unverified statement. No pg_hba.conf content was provided; this Docker image must have a very open default config for Postgres.
`Ofcourse I password protected it, but seeing as it was meant to be temporary, I didn't dive into securing it properly.`
Despite people slating the author, I think this is a reasonable oversight.
On the surface, spinning up a Postgres instance in Docker seems secure because it’s contained. I know many articles claim "Docker = secure".
Whilst easy to point to common sense needed, perhaps we need to have better defaults. In this case, the Postgres images should only permit the cli, and nothing else.
I’d recommend using something like Tailscale for these use cases and general access, there’s no need to expose services to the internet much of the time.
The usual route people take is to use a VPN (Tailscale, Cloudflare Tunnels, etc.), expose things only locally, and require being on the VPN network to access services. The other route is to expose no ports directly and rely on a reverse proxy. You can also combine the two approaches, and it is relatively easy even for non-SWE homelab hobbyists.
I use HAProxy on PFSense to expose a home media server (among other services) for friends to access. It runs on a privileged LXC (because NFS) but as an unprivileged user.
Is this reckless? Reading through all this makes me wonder if SSHFS (instead of NFS) with limited scope might be necessary.
My self-hosting setup uses Cloudflare tunnels to host the website without opening any ports. And Tailscale VPN to directly access the machine. You may want to look at it!
I was exposing my services the same way for a long time. Now I only expose web services via Cloudflare, with an iptables configuration to reject everything on port 443 not coming from them.
I also use knockd for port knocking to allow the ssh port, just in case I need to log in to my server without having access to one of my devices with Wireguard, but I may drop this since it doesn't seem very useful.
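For anyone curious what that looks like, a minimal knockd configuration follows the pattern below (the knock sequence and SSH port are examples; knockd substitutes the knocker's source address for %IP%):

```ini
; /etc/knockd.conf (sketch)
[options]
    logfile = /var/log/knockd.log

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

Hitting TCP 7000, 8000, 9000 in order within 5 seconds opens port 22 for the knocking host only; a companion close rule (or a timeout variant) is usually added to remove the rule again.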
Every server gets constantly probed for SSH. Since half the servers on the Internet haven't been taken over yet, it doesn't seem like SSH has significant exploits (well, there was that one signal handler race condition).
Unless you're trying to do one of those designs that cloud vendors push to fully protect every single traffic flow, most people have some kind of very secure entry point into their private network and that's sufficient to stop any random internet attacks (doesn't stop trojans, phishing, etc). You have something like OpenSSH or Wireguard and then it doesn't matter how insecure the stuff behind that is, because the attacker can't get past it.
At this point docker should be considered legacy technology, podman is the way to go.
Why am I running containers as a user that needs to access the Docker socket anyway?
Also, shoutout to the teams that suggest easy setup running their software in a container by adding the Docker socket into its file system.
I'm not sure — what do C and C++ have to do with this?
Been using it for years and it’s been solid.
This exact thing happened to a friend, but there was no need to exploit obscure memory safety vulnerabilities, the password was “password”.
My friend learnt that day that you can start processes in the machine from Postgres.
I learned how to do everything through SSH - it is really an incredible Swiss Army knife of a tool.
Especially with a tool that you don't have an enterprise-class firewall in front of, security needs to be automatic, not an afterthought.