One of the biggest security concerns isn't listed: avoid 8000:8000-style port publishing, because on most cloud providers it opens your service up to the world unless you explicitly block that port with a cloud-level firewall. If you're hosting somewhere like DigitalOcean, you could very easily not be using their external cloud firewall.
Even if you use a cloud firewall it's worth avoiding 8000:8000 for the sake of being explicit with your intentions.
The reason to avoid it is that you'll probably have your services reverse proxied by nginx, in which case only 80/443 need to be published: the internet hits nginx, not your internal service at example.com:8000 or whatever port it's running on.
This topic and many other security gotchas / best practices were covered in my DockerCon talk from a few months ago: https://nickjanetakis.com/blog/best-practices-around-product... It goes over patterns for using a more restrictive and secure 127.0.0.1:8000:8000 value in prod while still using 8000:8000 in dev, so you can check the app from multiple devices on your local network, all with the same docker-compose.yml file.
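A minimal sketch of that pattern (the `DOCKER_WEB_PORT_FORWARD` variable name is an assumption, not taken from the talk): the published bind address comes from an environment variable that defaults to loopback-only, and dev overrides it in a `.env` file.

```yaml
# docker-compose.yml (sketch)
services:
  web:
    build: .
    ports:
      # Defaults to publishing only on 127.0.0.1 (prod-safe).
      # In dev, set DOCKER_WEB_PORT_FORWARD=8000 in your .env
      # file to publish on all interfaces for LAN testing.
      - "${DOCKER_WEB_PORT_FORWARD:-127.0.0.1:8000}:8000"
```

Compose expands `${VAR:-default}` with shell-style defaults, so the same file serves both environments.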
Thank you for the article "Best Practices Around Production Ready Web Apps with Docker Compose". I've been referring to it since seeing the original HN submission [0]. Something like Kubernetes was overkill for my needs, especially since Docker Compose is already a part of my development workflow.
I would recommend using Rootless Docker over this cheat sheet. It makes half the issues they are trying to work around moot, and it also solves the issue of Docker punching a hole through UFW.
Some of the points in TFA are fixes to be done in the Dockerfile, meaning that any user of the Docker image would benefit, not just those who run rootless Docker.
Every time I think "I should move off of AWS managed services and onto Docker/Kubernetes. Think of the cloud-agnostic services! The ease of spinning up test environments!", I see an article like this, and I'm reminded that I seem to be buying myself out of a lot of pain. But am I totally wrong?
I'm afraid you are wrong, and maybe suffering a bit from AWS Stockholm syndrome. Vanilla Kubernetes is far easier to set up and far faster to deploy/update than any AWS solution involving a pig/alb/vpc/sg/ec2/ECS/etc. The only downside is that you'll have to forego AWS's global deployments and redundancy, as well as higher-level service offerings such as logging, alarms, tracing, etc. Nevertheless, that downside is countered by the drastic reduction in operational cost resulting from not being price gouged by AWS.

Managing k8s yourself on bare metal is hard, though. Managed k8s on any provider is a real value.
The first reason is that it's all running on Linux anyway, and Linux is (in general) swiss cheese, security-wise. Even with a billion container tweaks, there are still holes that can be exploited from the container to escalate to the host OS.
The second is that attackers don't need to privilege-escalate to the host to cause havoc. If an attacker can read memory, they can grab credentials for other network services and exploit those from inside the container. They can drop malware on users of a service, or upload it to one. They can scan the network for another vulnerable service. Or, if something like the EC2/ECS metadata service was left accessible, they can start enumerating your cloud account(s). That's more than enough for the average attacker.
Just assume that a Docker container is exactly the same as running a regular process on the host OS, and it will be much simpler to identify attack surfaces and mitigate them.
> I don't think it's worth doing all these tweaks.
I feel like I can understand this point of view, since following all of that advice would indeed be cumbersome. At the same time, you definitely have to consider what you're running on your infrastructure. A small internal system, or even an ERP that isn't exposed to the internet, will probably give you more leeway to ship stuff now without spending bunches of time locking everything down, especially if you build all of the containers yourself. On the other hand, a large finance application that is publicly accessible and needs to weather thousands of attacks daily will probably need a rather different approach.
Overall, however, I'd say that it's good to have lists of tips like these, because figuring all of it out alone would take a whole lot of time. That said, even in more relaxed environments, it's generally a good idea to consider at least some of them, for example:
> Unless you are very confident with what you are doing, never expose the UNIX socket that Docker is listening to: /var/run/docker.sock
Being an early adopter of Docker, I once made this mistake on a throwaway VPS. It took less than 24 hours for it to start mining crypto. That said, the socket can be a good option for tools like Portainer (which implementations like Podman miss out on), but it should never be exposed publicly.
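For reference, a hedged sketch of that trade-off: Portainer gets the socket, but its UI is published only on loopback (the image tag and port are taken from Portainer's usual defaults; treat the exact values as assumptions).

```yaml
# docker-compose.yml (sketch)
services:
  portainer:
    image: portainer/portainer-ce
    volumes:
      # Grants full control of the Docker daemon -- never expose
      # this service directly to the internet.
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      # UI reachable only from the host itself; front it with an
      # authenticated reverse proxy if remote access is needed.
      - "127.0.0.1:9000:9000"
```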
As always, security isn't a boolean of on/off, but rather is a sliding scale of sorts - figure out the risks that you're likely to be facing and choose the appropriate means to combat them. Of course, it would be better if Docker provided safer defaults, too.
I'd say it depends very much on your threat model. Would I trust a Docker container with a hostile multi-tenant setup without additional controls.... no.
Is the isolation provided by Docker worth nothing, from a security perspective... also no :)
Hardening containers is a good element of an overall security strategy. It needs to be combined with other controls, both preventative controls at things like the network layer, and detective controls to spot when a preventative control fails and allow for rapid mitigation.
> Just yesterday I was thinking hmm, copying the .env seems like a shitty way to store env vars.
It really depends on the context. Docker already supports defining env vars in container images, so it makes no sense to sneak a .env file into a container image. If all you're doing is setting env vars locally to run a container, and those env vars don't include secrets, then it's pretty safe. However, it would be preferable for those env vars to be handled by the container orchestration system: docker-compose files support specifying env variables, as do Kubernetes manifests.
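As a sketch of the preferred approach: keep the values out of the image layers entirely and let compose inject them at container start (the file names here are illustrative):

```yaml
# docker-compose.yml (sketch)
services:
  web:
    build: .
    # Read when the container starts; never baked into an image layer.
    env_file:
      - .env.production
    environment:
      # Non-secret settings can be declared inline.
      LOG_LEVEL: info
```

It's also worth listing `.env*` in `.dockerignore` so a stray `COPY . .` in the Dockerfile can't pull those files into the image anyway.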
It’s widely accepted “best practice” to not run processes in docker containers as root but I think it does a disservice to not explain why. The top benefit in my mind is to be able to mark interpreted code (server side JS/python/ruby) as read only in case someone gains shell access in the container. It’s great to set up another user for execution, but you should also make sure to copy in code and build artifacts as read-only. I’m curious if people can name other direct benefits to a non-root user other than “I don’t trust docker/Linux process isolation”.
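A hedged Dockerfile sketch of that setup (the user name and paths are illustrative): code is copied in owned by root but executed by an unprivileged user, so the running process can't rewrite its own source.

```dockerfile
FROM python:3.12-slim

# Unprivileged user with no login shell (the name 'app' is illustrative).
RUN useradd --create-home --shell /usr/sbin/nologin app

WORKDIR /srv/app

# Files copied without --chown default to root ownership, so the
# 'app' user can read the code but not modify it.
COPY . .

# Everything from here on runs unprivileged.
USER app
CMD ["python", "main.py"]
```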
If you want to set the filesystem as read only in a Docker container you can do that, without needing to set-up multiple users in the container.
In general that's a good piece of hardening advice, you just need to mount an empty volume for any temp files that are needed by the app.
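In compose terms, that might look like the following sketch (the `/tmp` path is just an example of a writable scratch area):

```yaml
# docker-compose.yml (sketch)
services:
  web:
    build: .
    # Root filesystem is mounted read-only...
    read_only: true
    tmpfs:
      # ...with an in-memory scratch area for temp files.
      - /tmp
```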
For not running as root, the main benefit (in my view) is that there have been multiple CVEs in container stacks where the issue was fully or partially mitigated if the container was running as non-UID-0.
CVE-2021-30465 was partially mitigated, and CVE-2020-15257 required the container to be running as UID-0.
So whilst it's not a panacea, in general not running containers as root is a good layer of defence.
This seems to mitigate the specific threat where I gain non-root shell access to the container, but there's a root process running interpreted code, which I can modify.
However, if the container doesn't contain any processes running as root, there doesn't seem to be any benefit (besides defense in depth) to marking the code as read-only.
[0] https://news.ycombinator.com/item?id=27359081
Rootless Docker docs: https://docs.docker.com/engine/security/rootless/
[+] [-] asymmetric|4 years ago|reply
[+] [-] totetsu|4 years ago|reply
[+] [-] HWR_14|4 years ago|reply
But am I totally wrong?
[+] [-] rualca|4 years ago|reply
[+] [-] ohthehugemanate|4 years ago|reply
managing k8s yourself on bare metal is hard. Managed k8s on any provider is a real value.
[+] [-] unknown|4 years ago|reply
[deleted]
[+] [-] DeepYogurt|4 years ago|reply
[+] [-] sdze|4 years ago|reply
[+] [-] raesene9|4 years ago|reply
[+] [-] encryptluks2|4 years ago|reply
[+] [-] throwaway984393|4 years ago|reply
The first reason is that it's all running on Linux anyway, and Linux is (in general) swiss cheese, security-wise. Even with a billion container tweaks, there are still holes that can be exploited from the container to escalate to the host OS.
The second is that attacks don't need to privilege-escalate to the host to cause havoc. If the attacker can read memory, they can get credentials for other network services and exploit them from the container. Or they can drop malware from the container to any users of a service, or upload it to a service. Or they can just scan the network looking for another vulnerable service. Or it could be something like EC2/ECS Metadata Service was left accessible and they can start enumerating your cloud account(s). More than enough for the average attacker.
Just assume that a Docker container is exactly the same as running a regular process on the host OS, and it will be much simpler to identify attack surfaces and mitigate them.
[+] [-] KronisLV|4 years ago|reply
I feel like i can understand this point of view, since following all of that advice indeed would be cumbersome. However, at the same time you definitely have to consider what it is that you're running on your infrastructure. A small internal system or even an ERP that's not exposed to the internet will probably give you more leeway in regards to being able to ship stuff now, without spending bunches of time locking everything down, especially if you build all of the containers yourself. On the other hand, a large finance application that is publically accessible and needs to weather thousands of attacks daily will probably need a rather different approach.
Overall, however, i'd say that it's good to have lists of tips like these, because figuring out all of it alone would take a whole lot of time. That said, even in the more relaxed environments, it's generally a good idea to consider at least some of them, for example:
> Unless you are very confident with what you are doing, never expose the UNIX socket that Docker is listening to: /var/run/docker.sock
Being an early adopter of Docker, i once made this mistake on a throwaway VPS. It took less than 24 hours for it to be mining crypto. That said, the socket can be a good option for tools like Portainer (which implementations like Podman miss out on), yet it definitely should never be exposed publically.
As always, security isn't a boolean of on/off, but rather is a sliding scale of sorts - figure out the risks that you're likely to be facing and choose the appropriate means to combat them. Of course, it would be better if Docker provided safer defaults, too.
[+] [-] raesene9|4 years ago|reply
Is the isolation provided by Docker worth nothing, from a security perspective... also no :)
Hardening containers is a good element of an overall security strategy. It needs to be combined with other controls, both preventative controls at things like the network layer, and detective controls to spot when a preventative control fails and allow for rapid mitigation.
[+] [-] goforbg|4 years ago|reply
[+] [-] rualca|4 years ago|reply
It really depends on the context. Docker already supports defining env cars in container images, so it makes no sense to sneak a .env file into a container image. If all you're doing is setting env cars locally to run a container then if those env cars don't include secrets then it's pretty safe. However it would be preferable if those env cars are handled by the container orchestration system. For instance, docker compose files also support specificing env variables, as well as Kubernetes.
[+] [-] rzimmerman|4 years ago|reply
[+] [-] raesene9|4 years ago|reply
In general that's a good piece of hardening advice, you just need to mount an empty volume for any temp files that are needed by the app.
For not running as root, the main benefit (to my view) is that there have been multiple CVEs in container stacks where the issue fully or partially mitigated if the container was running as non UID-0
CVE-2021-30465 was partially mitigated, and CVE-2020-15257 had a requirement of the container running as UID-0.
So whilst it's not a panacea, in general not running containers as root is a good layer of defence.
[+] [-] physicles|4 years ago|reply
However, if the container doesn't contain any processes running as root, there doesn't seem to be any benefit (besides defense in depth) to marking the code as read-only.
[+] [-] redis_mlc|4 years ago|reply
[deleted]