I like docker because it makes it super easy to try out apps that I'm not sure I want, and if I don't, I can just delete them.
I’m also confused about the claim that there is no config file. Everyone I know uses docker compose; that's really the only right way to use docker. Using a single docker command is for testing or something; if you're actually using the app long term, use docker compose. Also, most apps I use do have a specific place in the docker compose file where you can set configuration.
it really does allow easy setup with compose: multiple containers, different versions, etc. I have been setting up linux servers and desktops for decades, but docker made a lot of things way easier
I still have email server setups I would never dare try to touch with docker, but I know it is possible
like a lot of things it has its uses and it's really good at what it does
in addition, docker compose also supports reading environment variables / .env files from outside, which you can use for configuration inside the docker compose file.
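For instance (service and variable names here are just illustrative), a compose file can substitute values from a `.env` file sitting next to it:

```yaml
# .env, in the same directory as the compose file:
#   APP_PORT=8080
#   APP_TZ=UTC

services:
  app:
    image: example/app:1.2.3   # illustrative image name
    ports:
      - "${APP_PORT}:8080"     # substituted from .env
    environment:
      - TZ=${APP_TZ}
```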
That... was not a very convincing article! It came across as a frustrated op-ed where the author intentionally focused on the negatives rather than steel manning their own argument. Any potential positives were handwaved as out-of-scope.
VSCode devcontainers are awesome. They are my default way of working on any new project.
Being able to blow away a container and start with a fresh, reproducible setup - at any time - saves so many headaches.
It kind of seemed like this might have been the first time the author tried doing anything with Docker containers. And yeah... if you're trying to work with and modify a Docker container the same way you'd work with a VM, you're going to have a bad time.
Agreed. I really tire of flat, authoritative statements with no room for context or clarification. Perhaps that's what's needed to succeed in a VC world, but it's the easiest way to get me to dismiss the author's opinion because they leave no room for discussion.
> Well, if you’re expecting Docker to have a file-system easily accessible, you’re wrong—in fact, that’s “the point.” I can’t use typical commands like updatedb/locate/find to find what I need. I have to run a command with a massive prefix specific to that container. I don’t have tab completion when running Docker container commands, so when I inevitably mistype while searching for the file or attempting to delete it, I have to re-edit a multi-line command.
Yes, if they want to edit the running container's config, that is exactly what to do. Also, if you are just using a mounted volume for the configs, you don't even have to go that far. You can just edit the mounted volume on the host machine and the change will show up immediately in the container.
However, I would think you would want to edit the Dockerfile instead, so that the fix applies every time you recreate the container.
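A minimal sketch of the mounted-config approach (image name and paths are hypothetical):

```yaml
services:
  app:
    image: example/app:latest   # hypothetical image
    volumes:
      # Edit ./config/app.conf on the host; the change is
      # immediately visible inside the container at /etc/app.
      - ./config:/etc/app
```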
But I think the whole point of this post is that the author has no idea how docker works and is mad about having to learn docker things, so nobody should use them. Never mind the entirety of cloud infrastructure running in containers and doing amazing things. Never mind being able to duplicate state across tons of different servers at the same time. It shows that the author isn't into making infrastructure at scale, and has no idea how incredible docker has been to software development, CI/CD pipelines, and deployment / release infrastructure as a whole.
No, you're not missing anything. Aside from the small set of containers that are built "FROM scratch" and don't have a shell binary, you can run exactly the command you wrote.
The author didn't seem to research how to use Docker before writing this.
Docker has so much overhead, both conceptual and technical. I hear people recommend it for simplicity all the time and assume they have out-of-date, insecure setups ... Docker containers require more setup to secure and back up, in my experience.
docker is not secure. It has no "real" security boundary, and any malicious actor could have you run a docker image that is just as much malware as an executable. Like locks on doors, it just keeps out the honest people. So I say effort spent trying to secure it is wasted.
> backup
if you have data in the docker instance, you have to use volume mounts, and then back up that volume mount. I'd say it's easier to back up than an installed app, since with an installed app you can't be sure it didn't write its data somewhere else!
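One common sketch for that (volume name and paths are examples, and this assumes a docker daemon is available): mount the named volume read-only into a throwaway container and tar it out to the host.

```shell
# Archive the contents of the named volume "appdata"
# into ./appdata-backup.tar.gz on the host.
docker run --rm \
  -v appdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/appdata-backup.tar.gz -C /data .
```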
Among the other wrong/dumb things in the post, equating all containers with "Docker" reveals a lot of ignorance. You can keep your nice existing stuff and still use containers (and reap the benefits) if you just use LXC. You get the advantages of not polluting a host with everything (including potentially incompatible dependencies), plus cgroups, namespaces, and ease of migration (it's really easy to bring LXC containers to a new host), without having to buy wholesale into all the parts of Docker you don't want.
This is (basically) Burger King: you can have it your way. You just have to actually learn some new shit once in a while and not refuse to ever pick up or learn anything new.
Why would you edit or delete files inside a running container? It’s ephemeral and supposed to be stateless. You can attach volumes from the host system if you need persistence.
At least one use case I've seen for Docker is replicating a massively microservice-oriented system. If your app is deployed across 200 different containers in prod, you're going to test it by spinning up the same basic containers with Docker in dev. That means a lot of incremental changes (trivial stuff like adding transient logging or bypassing default flows) inside the container as part of the development process.
Then you get into politics: you might need change XYZ for your feature, but you don't own that common image and have to rely on someone else to deploy it; until then it's manual patches.
Because there are different use cases. About 100% of the people I talk to who use docker are using it to make a separate set of dependencies available in a possibly-different distribution. For THOSE people, the ephemeral, stateless nature of docker is a huge detriment to usability, and a chroot would be far more appropriate. I see docker users waste countless hours working around its statelessness. All the time. YMMV
Hmmm, this reads like the author understands neither Docker nor Linux; many of the issues they have are just things they don't know the right approach to tackling.
Imagine pairing with a mid/Sr and watching them scroll up 40 commands in the terminal and they are complaining that bash won't let them up-arrow 10 lines at a time. In this case, someone writes 5000 words about how they can't get certbot working with their docker setup. They would benefit a lot from working with someone who knows what they are doing.
Great points in this thread, but I would say another advantage of Docker is that of documentation.
The Dockerfile is a description of a reproducible build, and a docker-compose.yml file documents how running services interact, which ports are exposed, any volumes that are shared, etc.
It’s all too easy for config knowledge to be siloed in people. I got the impression that the Author prefers tinkering with pet servers.
Very poor article. It doesn't even acknowledge the main reason people use containers, which is to reliably set up, manage, and replicate an environment regardless of the machine it’s running on.
And this guy runs a bitcoin payment service? Is this the technical level of the people writing critical payment code in the bitcoin ecosystem? Yikes
The only problem I have with containerization is that it's not optimal. You're adding all sorts of unnecessary overhead, often to avoid fixing an underlying problem.
Unfortunately, the real software world doesn't solve underlying problems; people just want things up and running as soon as possible. So containerization has proved to be pretty useful.
From a hacker perspective, it’s just boring in the same way I feel about AI. It takes the fun out of crafting software.
The irony here is that the author doesn't know how to use containers (there's nothing specifically Docker there), yet seems to portray a level of Unix knowledge ...
Containerization is so widespread, from almost every programming book and beginner class to the foundation for most of the Internet running today. As a technology, it's very easy to teach. I suggest keeping an open mind about relevant and ubiquitous technology, and at least coming up with a compelling alternative for any of the wildly popular use cases that have made it commodity tech at this point.
Lack of real moderation and reddit tier opinions like this are why I no longer visit this site on a daily or even regular basis.
Containerization is amazingly great for scientific computing. I don’t ever want to go back to doing the make && make install dance and praying I’ve got my dependency ducks in a row.
The only real feature of Docker is the ability to keep unmaintained software running as the world around it moves forward. Academics could do the same thing by just distributing read-only VMs.
Very unconvincing. With container images, you can use docker compose or k8s to declare and deploy your entire service architecture. That is… massively useful.
I get the impression that the person who wrote this article doesn't know much about docker. Running 2 apps and a certbot can easily be done without containers. Try running 20 apps, some of which depend on the same dependency but at different versions.
Regarding security, it depends on how you set up your containers. If you just run them all with default settings (the root user) and give them all the permissions, then yeah, quite insecure. But if you spend an extra minute to create a new non-root user for each container and restrict the permissions, then it's quite secure. There are plenty of docker hardening tutorials.
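Those few extra minutes might look something like this in a compose file (service name and uid/gid are examples):

```yaml
services:
  app:
    image: example/app:1.0      # illustrative image
    user: "1000:1000"           # run as a non-root uid:gid
    read_only: true             # read-only root filesystem
    cap_drop:
      - ALL                     # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true  # block privilege escalation
```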
Regarding ease of setup, it took me a while to learn docker. Setting up my first containers took a very long time. But now it's quick and easy. I have templates for the docker compose file and I know how to solve all the common issues.
Regarding ease of management, it's very easy. Most of my containers are set up once and forgotten. Upgrading an app just requires changing the version in the docker-compose file.
> It’s no easier to setup a Docker file than a installation shell script, even one that runs on multiple platforms.
I would be very curious to see this done in a robust way: Bash vs. PowerShell, all the various package managers on those operating systems, permissions as the programs are installed onto the OS itself.
When I tried this (granted, as a very junior developer), I did not succeed.
Did it for decades before Docker was a thing, even for projects with lots of complex dependencies, some tied into the OS. Basically you just need a well-documented, thorough, and up-to-date set of steps for setting up the environment.
This is something you should still have even if using Docker.
I don't knock Docker for making that easy and "tied up with a bow and ribbon" for its users (that's great!), but do agree there are times when you really don't need the extra abstraction layer.
For the most part, people decide to create Docker containers as if they're deploying everything to heavy prod, so they chop the image down super narrow. Just solve for your own use case. I run my blog and other things on a home server with a Cloudflare reverse proxy in front of it. I don't use `docker` strictly, but I do use systemd quadlets and podman, and it's the same thing.
If you're unhappy with the tooling in your Docker image, just make it so you can become root in it, make sure it has debug tooling, and so on. Nothing stops you from running `updatedb` and `locate` in it. It's just an overlayfs for the filesystem, nothing fancy.
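For example (container name is hypothetical, and the package names assume a Debian-based image):

```shell
# Get a root shell in the running container "myapp"
docker exec -u 0 -it myapp sh

# Then, inside the container, pull in the tools you miss:
apt-get update && apt-get install -y mlocate procps less
updatedb && locate app.conf
```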
I understand the urge for this somewhat. There is some containerization overhead, and at a small prop trading shop we wouldn't do it (apart from the annoyance of plumbing onload etc.; I never figured out how to control scheduling properly), but for most things containers are a godsend.
How about "this has too many dependencies which are tricky to set up and I think they might change under me and I won't be able to run the project anymore"?
I've created a (working) docker image and even if the stuff in the dockerfile breaks, I still have the image and can run the damn thing.
> There are basically only two “real” reasons to use Docker or containerization more generally:
> 1. People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
> 2. People who are deploying a program for a corporation at a massive enterprise scale, don’t care about customizability and need some kind of guarantor of homogeneity.
Unix is only around because of its use at massive enterprise scale. Very few people were using Unix instead of DOS (or Mac OS or Windows or whatever) on their home PCs; it only got popular, and people learned how to use it and later Linux, because of its use in business. Nowadays, Docker is the standard packaging system at massive enterprise scale. As such, you should learn to use it.
> Very few people were using Unix instead of DOS (or Mac OS or Windows or whatever) for their home PCs; it only got popular and people learned how to use it and later Linux because of its use in business
I would say this part is correct.
Your first statement is incorrect, as phrased, but I understand what you meant. Granted, you would have to wipe out all cloud providers using flavors of unix, most phones and macs to reduce the footprint. That being said, it's unpopular as a desktop OS. Phones and Macs hide it so well that most people are unaware of the underlying OS.
My first Linux machine was on my work desk in 1998, while we were running racks of UltraSPARCs in production.
I use docker extensively for local development in all my projects at home and at work. This guy is wrong about multiple things, eg "Well, if you’re expecting Docker to have a file-system easily accessible, you’re wrong"
I can access my docker OS with: docker exec -it containername bash (assuming it has bash).
If the container OS has autocomplete and other GNU tools and features, you get all the functionality. If you want to build that image, or even upgrade the image you have (most containers have access to package management), you have a new image you can use however you like... which might include running more than one service in the same container. Just like using a script on another unix machine, except without having to set up the physical networking or pay a host.
It's very UNIX-y to provide single entry points to services and run them in relative isolation (changes to one container do not affect the others) by default.
macOS (since version 10) is Unix. You can say most macOS users are not using the terminal or that back in the 1990s and 1980s, all the popular desktop OSes weren't based on Unix and that would probably be more accurate.
The massive enterprise scale part is more complicated.
First of all, we need to clarify that the "people who should know how to use Unix" here are developers and system administrators. Most people don't need to know Unix, and that's fine. You sometimes see people (I get the feeling the OP might be lowkey one of them) lamenting that everyone should be running Linux and doing everything through the terminal. This is like saying everyone should be driving manual transmission, baking their own bread, growing vegetables in their backyard, building their own computer from parts, sewing their own clothes... you get the story. All of these things can be cool and rewarding, but we lack the time and resources to become proficient at everything. GUIs are good for most people.
Now the deal with developers using Unix is a much more complex story. Back in the 1970s Unix wasn't very enterprise-y at all, but it gained traction in universities and research labs and started spreading to the business world. Even well into the 1980s, the "real" enterprise was IBM mainframes, with Unix still being somewhat of a rebel, but it was clearly the dominant OS for minicomputers, which were later replaced by (microcomputer-sized but far more expensive) servers and workstations. There were other competitors, such as Lisp Machines and BeOS, but nothing ever came close to overtaking Unix.
Back in the 1980s, people were not using Unix on their home computers because their home computers were _just not powerful enough_ to run Unix. Developers who had the money to spare certainly did prefer an expensive Unix workstation. So large (for the time) microcomputer software vendors often used Unix workstations to develop software that was later run on cheaper microcomputer OSes. Microsoft famously used its own version of Unix (Xenix) as its main development platform during the 1980s.
This shows the enterprise made a great contribution to popularizing Unix. Back in the 1980s and 1990s there were a few disgruntled users[1] who saw the competition dying before their eyes and had to switch to the dominant Unix monoculture (if by "monoculture" you mean a nation going through a 100-sided, 20-front post-apocalyptic civil war). But nobody complained about having to ditch DOS and use an expensive Unix workstation, except, perhaps, for the fact that their choice of games got a lot slimmer.
This is all great and nice, but back in the 1990s most enterprise development moved back to Windows. Or maybe it's more precise to say that the industry grew larger and new developers were using Windows (with the occasional command prompt), since it was cheap and good enough. Windows was very much entrenched in the enterprise, as was Unix, but their spheres of market dominance were different. There were two major battlegrounds where Windows was gaining traction (medium-sized servers and workstations). Eventually Windows almost entirely lost the servers but decisively won the workstations (only to lose half of them again to Apple later on). The interesting part is that Windows was slowly winning over the enterprise versions of Unix, but eventually lost to the open-source Linux.
Looking at this, I think the explanation that Unix won over DOS/Windows CMD/PowerShell (or Mac OS 9, if we want to be criminally anachronistic) because of enterprise dominance is waaaay too simplistic. Sure, Unix's enterprise dominance killed Lisp Machines and didn't leave any breathing space for BeOS, but that's not the claim. DOS was never a real competitor to Unix, and newer versions of Windows were probably the dominant development platform for a while.
I think Unix won over pure Windows-based flows (whether with GUI or supplemented by windows command-line and even PowerShell) because of these things:
1. It was the dominant server OS (except for a short period where Windows servers managed to dominate a sizable chunk of the market), so you needed to know Unix if you wrote server-side code, and it was useful to run Unix locally.
2. Unix tools were generally more reliable. Back in the 1990s and 2000s, Windows did have some powerful GUI tools, but GUI tools suffer when it comes to reproducibility, knowledge transfer, and productivity. It's a bit counterintuitive, but quite obvious if you think about it: locating some feature in a deeply nested menu or settings dialog and turning it on is more complex than just adding a command-line flag or setting an environment variable.
3. Unix tools are more composable. The story of small tools doing-one-thing-well and piping output is well known, but it's not just that. For instance, compare Apache httpd which had a textual config file format to IIS on Windows which had proprietary configuration database which often got corrupted. This meant that third-party tool integration, version control, automation and configuration review were all simpler on Apache httpd. This is just one example, but it applies to the vast majority of Windows tools back then. Windows tools were islands built on shaky foundations, while Unix tools were reliable mountain fortresses. They were often rough around the edges, but they turned out to be better suited for the job.
4. Unix was always dominant in teaching computer science. Almost all universities taught Unix classes and very few taught Windows. The students were often writing their code on Windows and later uploading it to a Unix server to compile (and dealing with all those pesky line endings that were all wrong). But they did have to familiarize themselves with Unix.
I think all of these factors (and probably a couple of others) brought in the popularization and standardization of Unix tools as the basis for software development in the late 2000s and early 2010s.
The limitations and gotchas of Docker containers are now well-known.
- Doesn't have the same resource isolation guarantees of quality type 1 hypervisors.
- Makes installation of cross-cutting concerns (monitoring and security agents) more difficult.
- Hand-waves away system administration.
- Challenges in assuring proper supply chain integrity.
But what it does is make infrastructure more accessible, repeatable, and standardized more simply than what came before, for better or worse. That's a giant dev UX win.
Them's called tradeoffs and choosing the right tool(s) in the toolbox for the particular purpose. Many of the concerns above can be mitigated with extra attention to detail.
I have been using Unix since 1983, and Linux since version <1.0, and I would respectfully suggest that the author has missed the point of Docker.
However, if you want to use a shell script for setup instead of a Dockerfile, and don’t mind terminating and recreating VMs when you change anything, and your DNS is set up well, then yeah, that can work almost as well. I do that, sometimes.
I don't like Docker, but I do like the idea of containers, and have since I moved from Solaris 9 to Solaris 10. These days I frequently use podman and sometimes use lxc. Ignoring the development workstation benefits, I like using containers to bundle (almost) all the configuration elements together, limit resources that one service can take (I know you can do that with cgroups as well), and isolate configurations from each other.
Containers correctly used make things much easier.
“I need to build this software stack for Debian 10 on Arm64 but I am running Arch on x86”: Docker container with a Debian cross-compilation toolchain, and all is good. “But I need a modern compiler”: install it in the container; problem solved, and you know the system deps match.
“This software is only validated on Ubuntu 24.04”, container.
Everyone has already mentioned having a dev environment that exactly matches prod, save the hardware: containers.
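A minimal sketch of that first scenario, assuming Debian's packaged arm64 cross toolchain:

```dockerfile
# Debian 10 build environment with an arm64 cross toolchain,
# usable from an x86 host.
FROM debian:10
RUN apt-get update && apt-get install -y \
    build-essential \
    crossbuild-essential-arm64
WORKDIR /src
# e.g. docker run --rm -v "$PWD":/src <image> \
#        aarch64-linux-gnu-gcc -o hello hello.c
```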
I love docker. It's indispensable. But it's bitterly hilarious that we are in a place where that's true. I hope in 20 years we have figured out the problems docker solves without another "wrap the whole thing in an abstraction layer" solution. But I had hoped we'd be there by 2010.
Docker gives me at least some form of isolation. Yes, I know container escapes are possible, but I run gVisor on top of it, which is a strong sandbox. If I were just running the app as a systemd service as a user, all an attacker would need is a Linux LPE, and those are in abundance.
> There are basically only two “real” reasons to use Docker or containerization more generally:
> 1. People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
> 2. People who are deploying a program for a corporation at a massive enterprise scale, don’t care about customizability and need some kind of guarantor of homogeneity.
The key evidence for this claim being wrong is looking at where containerization was first developed. At least as far as I know, the first OS to introduce containers was FreeBSD with its jails mechanism in 1999. FreeBSD is a Unix-based operating system, that is quite decidedly non-enterprise.
Containers are categorically not meant for "Windows developers who don't know Unix". You still need to understand Unix in order to run containers efficiently, perhaps even more so. They may offer a lower barrier to entry to get something to kinda-sorta work than the classic "wget https://foo.bar/foo.tar.gz && tar xvzf foo.tar.gz && cd foo && ./configure && make && make install", but that doesn't mean the technology is bad.
I think the OP is conflating several issues: container overuse (which does happen sometimes), certain tools being more complex than they need to be (-ahem- certbot), lack of experience in configuring and orchestrating containers, and the fact that inspecting and debugging containers requires an additional set of tools and techniques.
I agree with one thing: you shouldn't be using containers for everything. If you install all your tools as containers, performance will suffer and interoperability will become harder. On the other hand, when I'm running a server, even my own home server, containers are a blessing. I used to run servers without containers before, and I - for one - do not miss this experience in the slightest.
This has rather strong “old man yelling at clouds” vibes.
OP: Learn docker and it stops being an “impenetrable wall.” Face it, you don’t want to use docker (or podman) because you are set in your ways. That’s fine, but it is not an argument for anyone else.
When I encounter a README/INSTALL that advises Docker, I start to suspect that the package is a mess. I'm sure there are legitimate usages within enterprise-y scenarios, but it has commonly become a way to paper over other issues.
As an expert at yelling at clouds, I can say I love Docker and couldn't/wouldn't do development without it. Besides, most of my cloud stuff nowadays is targeting Cloud Run, so why shouldn't I use it to build?
Now I say all that, is there another solution I should be looking for doing similar things? Maybe this old man has missed something easier to use.
I do hate how Docker chews up my drive with images though...
Guy who has only heard internet catchphrase arguments encountering an unpleasant idea: "Getting a lot of 'old man yelling at clouds' vibes from this..."
The author clearly doesn't know how to use Docker and blames their issues on the tool, and even on the concept of containerization!
Regarding a file system: in most docker containers you should be able to run "docker exec -ti <id> sh" and you have a shell inside the container, where you *have autocomplete* and can *run linux commands like locate*.
Regarding configuration files, that's an application issue; 99% of the applications I run with docker use configuration files, because that's just how you manage software. So either your BTCPay thing doesn't have a configuration file, and it would be the same if you didn't use Docker, or it has one and you didn't know you could mount it inside the container.
And regarding the "fake" reasons:
> It’s no easier to setup a Docker file than a installation shell script, even one that runs on multiple platforms.
Um, no? Because between "knowing the environment my code runs in" and "not knowing the environment my code runs in", of course the first option is better and easier to reason about.
> Containers can only be “easier to manage” when they strip away all of the user’s ability to manage in the normal unix-way, and that is relatively unmissed.
What do you mean by the "unix way"? The container is a process; you can manage the process the unix way.
The focus is on the process's environment, which is better if the end user *doesn't* have to manage it.
> Containerization makes software an opaque box where you are ultimately at the mercy of what graphical settings menus have been programed into the software. It is the nature of containers that bugs can never been fixed by users, only the official development team.
I think you just don't know how to use Docker to edit your application's files, but it's really as easy as editing files on linux, because *the container is really just using a linux filesystem*.
> People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
Did you miss the fact that you need to know how to use linux to write a working Dockerfile? Because it still runs linux!
Author could have boiled this article down to his conclusion at the end and saved everyone a lot of time:
"Ergo, I don’t use Docker and containerization, I’m annoyed by them and I don’t do tutorials on them. They are not for me or for people who want to do basic personal sysadmining. I think enterprise sysadmins would definitely do better doing more for their personal life outside of things like Docker, but again, there are reasons people use these things for many professional use-cases."
Yeah, what I don't understand is that he seems to completely ignore the fact that a docker container is still running linux, just isolated in a new filesystem (plus other technologies I don't know a lot about, like namespaces).
So the author thinks it's better for users to perform (sometimes tedious) steps to get an application or a set of applications running, just so they "know how to use linux", while ignoring the fact that Docker/containerization's primary use case is on the developer side, and the developer needs to know linux to write a working Dockerfile.
Kinda. Yeah. Ignorance really isn't a virtue, and at some point bending over backwards to support people that don't want to learn things is counterproductive.
Am I missing something? Isn't this as easy as:
docker exec -it --user <username> <container> /bin/bash
I used this just yesterday to get shell access to a Docker container. From there I have full access to the filesystem.
blackjack_|7 months ago
However, I would think you would want to edit the Dockerfile instead so that you fix it every time you restart the container.
But I think the whole point of this post is that the author has no idea how docker works and is mad about having to learn docker things, so nobody should use them. Never mind the entirety of cloud infrastructure running in containers and doing amazing things. Never mind being able to duplicate state across tons of different servers at the same time. It shows that the author isn't into making infrastructure at scale, and has no idea how incredible docker has been to software development, CI/CD pipelines, and deployment / release infrastructure as a whole.
LelouBil|7 months ago
The author didn't seem to research how to use Docker before writing this.
leakycap|7 months ago
chii|7 months ago
docker is not secure. It has no "real" security boundry, and any malicious actor could have you run a docker image that is just as malware as an executable. Like locks on doors, it just keeps out the honest people. So i say effort spent trying to secure it is wasted.
> backup
if you have data in the docker instance, you have to use volume mounts, and then back up that volume mount. I say it's easier to back up than an installed app, since with an installed app you cannot be sure it didn't write its data somewhere else!
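One common pattern for that (a hedged sketch; the volume name and paths are placeholders, and it assumes a local Docker daemon with the volume already present) is to tar the volume from a throwaway container:

```shell
# Back up a named volume by mounting it read-only into a one-off container
docker run --rm \
  -v myapp-data:/data:ro \
  -v "$PWD/backups:/backup" \
  alpine tar czf "/backup/myapp-data-$(date +%F).tar.gz" -C /data .
```

The app's container keeps running untouched; only the shared volume is read.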
rpcope1|7 months ago
This is (basically) Burger King: you can have it your way, you just have to actually learn some new shit once in a while and not refuse to ever pick up or learn anything new.
kousthub|7 months ago
hakfoo|7 months ago
Then you get into politics: you might need change XYZ for your feature, but you don't own that common image and have to rely on someone else to deploy it, so until then it's manual patches.
dima55|7 months ago
khaki54|7 months ago
Imagine pairing with a mid/Sr and watching them scroll up 40 commands in the terminal and they are complaining that bash won't let them up-arrow 10 lines at a time. In this case, someone writes 5000 words about how they can't get certbot working with their docker setup. They would benefit a lot from working with someone who knows what they are doing.
hn_throw2025|7 months ago
The Dockerfile is a description of a reproducible build, and a docker-compose.yml file documents how running services interact, which ports are exposed, any volumes that are shared, etc.
It’s all too easy for config knowledge to be siloed in people. I got the impression that the Author prefers tinkering with pet servers.
rascul|7 months ago
It's not inherently reproducible but it can potentially be made so.
amrocha|7 months ago
And this guy runs a bitcoin payment service? Is this the technical level of the people writing critical payment code in the bitcoin ecosystem? Yikes
hadlock|7 months ago
pipeline_peak|7 months ago
Unfortunately, the real software world doesn't solve underlying problems; people just want things up and running as soon as possible. So containerization has proved to be pretty useful.
From a hacker perspective, it’s just boring in the same way I feel about AI. It takes the fun out of crafting software.
stephenbez|7 months ago
Though there are many reasons why you wouldn’t want to go in and delete a file since that won’t persist or be reproducible.
rawkode|7 months ago
That's 5 minutes of my life I'm not getting back.
routelastresort|7 months ago
Lack of real moderation and reddit tier opinions like this are why I no longer visit this site on a daily or even regular basis.
cmdrk|7 months ago
mike_d|7 months ago
maxk42|7 months ago
periodjet|7 months ago
mootoday|7 months ago
IMHO, that's the way to go. Instead of hundreds of megabytes or even gigabytes, we're talking kilobytes, sometimes megabytes for each unit of compute.
Actually sandboxed environments per component.
Add/remove permissions of who can execute what at runtime.
But then again, I'm biased towards modern tech and while it often turns out I was right on the money, it's not always the case.
Do your own research and such, but FYI about wasmCloud.
zerof1l|7 months ago
Regarding security, it depends on how you set up your containers. If you just run them all with default settings (as the root user, with all permissions granted), then yeah, quite insecure. But if you spend an extra minute to create a new non-root user for each container and restrict the permissions, then it's quite secure. There are plenty of docker hardening tutorials.
Regarding ease of setup, it took me a while to learn docker. Setting up my first containers took a very long time. But now it's quick and easy. I have templates for the docker-compose file and I know how to solve all the common issues.
Regarding ease of management, it's very easy. Most of my containers are set up once and forgotten. Upgrading an app just requires changing the version in the docker-compose file.
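A hardening sketch along those lines (the compose keys are real, but the service itself is hypothetical, and which capabilities to re-add depends on the app):

```yaml
services:
  myapp:
    image: myapp:1.2.3
    user: "1000:1000"            # run as a non-root UID/GID
    read_only: true              # read-only root filesystem
    tmpfs:
      - /tmp                     # writable scratch space only where needed
    cap_drop:
      - ALL                      # drop every Linux capability...
    cap_add:
      - NET_BIND_SERVICE         # ...then re-add only what the app requires
    security_opt:
      - no-new-privileges:true   # block privilege escalation via setuid binaries
```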
MattGaiser|7 months ago
I would be very curious to see this done in a robust way. Bash vs PowerShell. All the various package managers on those operating systems. Permissions, since the programs will be installed on the OS itself.
When I tried this, granted as a very junior developer, I did not succeed.
rkagerer|7 months ago
This is something you should still have even if using Docker.
I don't knock Docker for making that easy and "tied up with a bow and ribbon" for its users (that's great!), but do agree there are times when you really don't need the extra abstraction layer.
arjie|7 months ago
If you're upset with the tooling in your Docker image, just make it so you can become root in it, make sure it has debug tooling, and so on. Nothing stops you from running `updatedb` and `locate` in it. It's just an overlayfs for the filesystem, nothing fancy.
I understand somewhat the urge for this. There is some containerization overhead and at a small prop trading shop we wouldn't do it (apart from the annoyance of plumbing onload etc. I never figured out how to control scheduling properly) but for most things containers are a godsend.
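Concretely, that might look like this (a sketch only: the container name is a placeholder, and the package commands assume a Debian/Ubuntu-based image):

```shell
# Open a root shell in the running container, then add debug tooling
docker exec -it --user 0 mycontainer /bin/bash

# Inside the container (Debian-based image assumed):
apt-get update && apt-get install -y mlocate procps less
updatedb && locate myapp.conf
```

Tools installed this way vanish when the container is recreated; for anything permanent, they belong in the Dockerfile instead.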
udev4096|7 months ago
tasuki|7 months ago
I've created a (working) docker image and even if the stuff in the dockerfile breaks, I still have the image and can run the damn thing.
ranger207|7 months ago
> 1. People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
> 2. People who are deploying a program for a corporation at a massive enterprise scale, don’t care about customizability and need some kind of guarantor of homogeneity.
Unix is only around because of its use at massive enterprise scale. Very few people were using Unix instead of DOS (or Mac OS or Windows or whatever) for their home PCs; it only got popular and people learned how to use it and later Linux because of its use in business. Nowadays, Docker is the standard packaging system at massive enterprise scale. As such, you should learn to use it
Supermancho|7 months ago
I would say this part is correct.
Your first statement is incorrect as phrased, but I understand what you meant. Granted, you would have to wipe out all cloud providers running flavors of unix, plus most phones and Macs, to reduce the footprint. That being said, it's unpopular as a desktop OS. Phones and Macs hide it so well that most people are unaware of the underlying OS.
My first Linux machine was on my work desk in 1998, while we were running racks of UltraSPARCs in production.
I use docker extensively for local development in all my projects at home and at work. This guy is wrong about multiple things, e.g. "Well, if you’re expecting Docker to have a file-system easily accessible, you’re wrong"
I can access my docker OS from: docker exec -it containername bash (allowing that it has bash).
If the container OS has autocomplete and other GNU tools and features, you can get all the functionality. If you want to build that image or even upgrade the image you have (most containers have access to package management), you have a new image you can use the way you like...which might include running more than one service on the same container. Just like using a script on another unix machine, except without having to set up the physical networking or paying a host.
It's very UNIX-y to provide single entry points to services and run them in relative isolation (changes to one container do not affect the others) by default.
unscaled|7 months ago
The massive enterprise scale part is more complicated.
First of all, we need to clarify that the "people who should know how to use Unix" here are developers and system administrators. Most people don't need to know Unix and that's fine. You sometimes see people (I get the feeling the OP might be lowkey one of them) insisting that everyone should be running Linux and doing everything through the terminal. This is like saying everyone should be driving manual transmission, baking their own bread, growing vegetables in their back yard, building their own computer from parts, sewing their own clothes... you get the story. All of these things can be cool and rewarding, but we lack the time and resources to become proficient at everything. GUI is good for most people.
Now the deal with developers using Unix is a much more complex story. Back in the 1970s Unix wasn't very enterprise-y at all, but gained traction in universities and research labs and started spreading to the business world. Even well into the 1980s, the "real" enterprise was IBM mainframes, with Unix still being somewhat of a rebel, but it was clearly the dominant OS for minicomputers, which were later replaced by (microcomputer-sized but far more expensive) servers and workstations. There were other competitors, such as Lisp Machines and BeOS, but nothing ever came close to taking over Unix.
Back in the 1980s, people were not using Unix on their home computers because their home computers were _just not powerful enough_ to run Unix. Developers who had the money to spare certainly did prefer an expensive Unix workstation. So large (for the time) microcomputer software vendors often used Unix workstations to develop the software that was later run on cheaper microcomputer OSes. Microsoft famously used its own version of Unix (Xenix) during the 1980s as its main development platform.
This shows the enterprise made a great contribution to popularizing Unix. Back in the 1980s and 1990s there were a few disgruntled users[1] who saw the competition dying before their eyes and had to switch to the dominant Unix monoculture (if by "monoculture" you mean a nation going through a 100-sided, 20-front post-apocalyptic civil war). But nobody complained about having to ditch DOS and use an expensive Unix workstation, except, perhaps, for the fact that their choice of games to play got a lot slimmer.
This is all great and nice, but back in the 1990s most enterprise development moved back to Windows. Or maybe it's more precise to say the industry grew larger and new developers were using Windows (with the occasional Windows command prompt), since it was cheap and good enough. Windows was very much entrenched in the enterprise, as was Unix, but their spheres of market dominance were different. There were two major battlegrounds where Windows was gaining traction (medium-sized servers and workstations). Eventually Windows almost entirely lost the servers but decisively won the workstations (only to lose half of them again to Apple later on). The interesting part is that Windows was slowly winning over the enterprise versions of Unix, but eventually lost to the open-source Linux.
Looking at this, I think the explanation that Unix won over DOS/Windows CMD/PowerShell (or Mac OS 9 if we want to be criminally anachronistic) is waaaay too simplistic. Sure, Unix's enterprise dominance killed Lisp Machines and didn't leave any breathing space for BeOS, but that's not the claim. DOS was never a real competitor to Unix, and when it comes to newer versions of Windows, they were probably the dominant development platform for a while.
I think Unix won over pure Windows-based flows (whether with GUI or supplemented by windows command-line and even PowerShell) because of these things:
1. It was the dominant server OS (except for a short period where Windows servers managed to dominate a sizable chunk of the market), so you needed to know Unix if you wrote server-side code, and it was useful to run Unix locally.
2. Unix tools were generally more reliable. Back in the 1990s and 2000s, Windows did have some powerful GUI tools, but GUI tools suffer when it came to reproducibility, knowledge transfer and productivity. It's a bit counterintuitive, but it's quite obvious if you think about it: having to locate some feature in a deeply nested menu or settings dialog and turn it on, is more complex than just adding a command line flag or setting an environment variable.
3. Unix tools are more composable. The story of small tools doing-one-thing-well and piping output is well known, but it's not just that. For instance, compare Apache httpd which had a textual config file format to IIS on Windows which had proprietary configuration database which often got corrupted. This meant that third-party tool integration, version control, automation and configuration review were all simpler on Apache httpd. This is just one example, but it applies to the vast majority of Windows tools back then. Windows tools were islands built on shaky foundations, while Unix tools were reliable mountain fortresses. They were often rough around the edges, but they turned out to be better suited for the job.
4. Unix was always dominant in teaching computer science. Almost all universities taught Unix classes and very few universities taught Windows. The students were often writing their code on Windows and later uploading their code to a Unix server to compile (and dealing with all these pesky line endings that were all wrong). But they did have to familiarize themselves with Unix.
I think all of these factors (and probably a couple of others) brought in the popularization and standardization of Unix tools as the basis for software development in the late 2000s and early 2010s.
[1] See the UNIX-Hater's Handbook: https://web.mit.edu/~simsong/www/ugh.pdf
burnt-resistor|7 months ago
- Doesn't have the same resource isolation guarantees of quality type 1 hypervisors.
- Makes installation of cross-cutting concerns (monitoring and security agents) more difficult.
- Hand-waves away system administration.
- Challenges in assuring proper supply chain integrity.
But what it does is make infrastructure more accessible, repeatable, and standardized more simply than what came before, for better or worse. That's a giant dev UX win.
Them's called tradeoffs and choosing the right tool(s) in the toolbox for the particular purpose. Many of the concerns above can be mitigated with extra attention to detail.
msgodel|7 months ago
rsolva|7 months ago
wrs|7 months ago
However, if you want to use a shell script for setup instead of a Dockerfile, and don’t mind terminating and recreating VMs when you change anything, and your DNS is set up well, then yeah, that can work almost as well. I do that, sometimes.
jdboyd|7 months ago
jpc0|7 months ago
“I need to build this software stack for Debian 10 on Arm64 but I am running Arch on x86” -> Docker container with a Debian cross-compilation toolchain and all is good. “But I need a modern compiler”: install it in the container, problem solved, and you know the system deps match.
“This software is only validated on Ubuntu 24.04”, container.
Everyone has already mentioned have a dev environment that exactly matches prod save hardware, containers.
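A sketch of the first case (the cross-toolchain package names are the stock Debian ones; the image name and build command are illustrative):

```dockerfile
# Debian arm64 cross-build environment, usable from an x86 host
FROM debian:12
RUN apt-get update && apt-get install -y \
    crossbuild-essential-arm64 \
    cmake ninja-build
# Build with something like:
#   docker run --rm -v "$PWD:/src" -w /src cross-image \
#     aarch64-linux-gnu-gcc -o hello hello.c
```

The host distro and architecture stop mattering; the toolchain lives entirely in the image.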
happytoexplain|7 months ago
kylegalbraith|7 months ago
Not that I think Docker should always be used. It's a simple piece of tech on the surface but explodes in complexity the more complicated you try to get.
All that said, this article feels detached from reality.
udev4096|7 months ago
huksley|7 months ago
I agree, for a lot of things we don't need containers, and running apps natively is so easy on modern Linux distributions
queenkjuul|7 months ago
I use it for nearly all of my personal self hosted apps and I'm never going back to doing it any other way.
kesor|7 months ago
grimblee|7 months ago
unscaled|7 months ago
The key evidence for this claim being wrong is looking at where containerization was first developed. At least as far as I know, the first OS to introduce containers was FreeBSD with its jails mechanism in 1999. FreeBSD is a Unix-based operating system, that is quite decidedly non-enterprise.
Containers are categorically not meant for "Windows developers who don't know Unix". You still need to understand Unix in order to run containers efficiently, perhaps even more so. They may provide a lower barrier of entry to get something to kinda-sorta-work than the classic "wget https://foo.bar/foo.tar.gz && tar xvzf foo.tar.gz && cd foo && ./configure && make && make install", but that doesn't mean the technology is bad.
I think the OP is confusing several issues like containers overuse (which does happens sometimes), certain tools being more complex than they need to (-ahem- certbot), lack of experience in configuring and orchestrating containers, and the fact that inspecting and debugging containers requires an additional set of tools or techniques.
I agree with one thing: you shouldn't be using containers for everything. If you install all your tools as containers, performance will suffer and interoperability will become harder. On the other hand, when I'm running a server, even my own home server, containers are a blessing. I used to run servers without containers before, and I - for one - do not miss this experience in the slightest.
comradesmith|7 months ago
If you admit this, then why do you go on to write against docker in such an authoritative tone?
I don’t think you understand docker.
adastra22|7 months ago
OP: Learn docker and it stops being an “impenetrable wall.” Face it, you don’t want to use docker (or podman) because you are set in your ways. That’s fine, but it is not an argument for anyone else.
palmfacehn|7 months ago
kordlessagain|7 months ago
Now I say all that, is there another solution I should be looking for doing similar things? Maybe this old man has missed something easier to use.
I do hate how Docker chews up my drive with images though...
edfletcher_t137|7 months ago
mrbluecoat|7 months ago
add-sub-mul-div|7 months ago
LelouBil|7 months ago
Regarding a file system, in most docker containers you should be able to run "docker exec -ti <id> sh" and you have a shell inside the container, where you *have autocomplete* and can *run linux commands like locate*.
Regarding configuration files, that's an application issue; 99% of applications I run with docker use configuration files, because that's just how you manage software. So either your BTCPay thing doesn't have a configuration file, and it would be the same if you didn't use Docker, or it has one and you didn't know you could mount it inside the container.
And regarding the "fake" reasons :
> It’s no easier to setup a Docker file than a installation shell script, even one that runs on multiple platforms.
Um, no? Because between "knowing the environment my code runs in" and "not knowing the environment my code runs in", of course the first option is better and easier to reason about.
> Containers can only be “easier to manage” when they strip away all of the user’s ability to manage in the normal unix-way, and that is relatively unmissed.
By "unix way", what do you mean? The container is a process, and you can manage the process the unix way.
The focus is on the process's environment, which is better if the end user *doesn't* have to manage it.
> Containerization makes software an opaque box where you are ultimately at the mercy of what graphical settings menus have been programed into the software. It is the nature of containers that bugs can never been fixed by users, only the official development team.
I think you just don't know how to use Docker to edit the files of your application, but it's really as easy as just editing files on linux because *the container is really just using a linux filesystem*
> People who do not know how to use Unix-based operating systems or specifically GNU/Linux.
Did you miss the fact that you need to know how to use linux to write a working Dockerfile? Because it still runs linux!
pploug|7 months ago
"Ergo, I don’t use Docker and containerization, I’m annoyed by them and I don’t do tutorials on them. They are not for me or for people who want to do basic personal sysadmining. I think enterprise sysadmins would definitely do better doing more for their personal life outside of things like Docker, but again, there are reasons people use these things for many professional use-cases."
kelvinjps10|7 months ago
sanex|7 months ago
bawolff|7 months ago
LelouBil|7 months ago
So the author thinks it's better for users to perform (sometimes tedious) steps to get an application or a set of applications running, just so they "know how to use linux", while ignoring the fact that Docker/containerization's primary use case is on the developer side, and the developer needs to know linux to write a working Dockerfile.
dima55|7 months ago
s_ting765|7 months ago
kmeisthax|7 months ago
[deleted]