hello0904|1 year ago
You mentioned a Python backend, so literally just replicate the build script directly on the VPS: "pip install -r requirements.txt" > "python main.py" > "nano /etc/systemd/system/myservice.service" > "systemctl start myservice" > Tada.
You can scale instances by just throwing those commands into a bash script (build_my_app.sh) = your new Dockerfile... install on any server in xx-xxx seconds.
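A minimal sketch of that build_my_app.sh, assuming the code is already cloned onto the box; every name and path here (myapp, myservice) is a made-up example, and the unit/systemctl steps are guarded so the script degrades gracefully outside a real VPS:

```shell
#!/bin/sh
# build_my_app.sh -- hypothetical one-shot setup; all names/paths are examples
# (on Debian/Ubuntu the venv module needs the python3-venv package)
set -eu

APP_DIR="${APP_DIR:-$PWD/myapp}"            # where the code was cloned
UNIT_DIR="${UNIT_DIR:-/etc/systemd/system}"

mkdir -p "$APP_DIR"

# 1. isolated interpreter + dependencies (note the -r flag)
python3 -m venv "$APP_DIR/venv"
[ -f "$APP_DIR/requirements.txt" ] && \
    "$APP_DIR/venv/bin/pip" install -r "$APP_DIR/requirements.txt"

# 2. a systemd unit so the app survives reboots and crashes
if [ -d "$UNIT_DIR" ] && [ -w "$UNIT_DIR" ]; then
    cat > "$UNIT_DIR/myservice.service" <<EOF
[Unit]
Description=my Python backend
After=network.target

[Service]
WorkingDirectory=$APP_DIR
ExecStart=$APP_DIR/venv/bin/python main.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
fi

# 3. start it (skipped when systemd isn't running, e.g. in a chroot)
if [ -d /run/systemd/system ]; then
    systemctl daemon-reload
    systemctl enable --now myservice
fi
```

Run it once per server and you have the same "image" everywhere, minus Docker.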
ghomem|1 year ago
For Python backends I often deploy the code directly with a Puppet resource type called vcsrepo, which basically places a certain tag of a certain repo at a certain filesystem location. I also package the systemd units for easy start/stop/restart. You can do this with other config management tools, via bash, or by hand, depending on how many systems you manage.
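A sketch of what that looks like in Puppet, assuming the puppetlabs/vcsrepo module; the repo URL, tag, and paths are made up:

```puppet
# deploy a pinned tag of the app (illustrative names throughout)
vcsrepo { '/opt/myapp':
  ensure   => present,
  provider => git,
  source   => 'https://example.com/myapp.git',
  revision => 'v1.4.2',
}

# ship the systemd unit alongside it and keep the service running
file { '/etc/systemd/system/myapp.service':
  ensure => file,
  source => 'puppet:///modules/myapp/myapp.service',
  notify => Service['myapp'],
}

service { 'myapp':
  ensure => running,
  enable => true,
}
```

Bumping the deployed version is then a one-line change to `revision`.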
What bothers me with your question is Pip :-) But perhaps that is off topic...?
Gud|1 year ago
Will not run on FreeBSD, for example.
ghomem|1 year ago
PyPI is an informal source of software with weak security guarantees, and it has been infested with malware many times over the years. It does not provide security updates: it provides updates that might include security-related changes as well as functional changes. Whenever you update a package from there, there is a chain reaction of dependency updates that inserts untested code into your product.
Because of this, I prefer to target an LTS platform (Ubuntu LTS, Debian, RHEL...) and adapt to whatever Python environment exists there, enjoying the fact that I can blindly update a package for security reasons (ex: Django) without worrying that it will be a new version which could break my app. *
Furthermore, with Ubuntu I can get a formal support contract with Canonical without changing anything in my setup, and with RHEL it comes built into the subscription. Last time I checked, Canonical's security team was around 30 people (whereas PyPI recently hired their first security engineer). These things provide supply-chain peace of mind to whoever consumes the software, not only to whoever maintains it.
I really need to write an article about this.
* exceptions apply, context is king
hello0904|1 year ago
Option 2: use Poetry
How is this different from a Dockerfile that creates the venv? Just add it at the beginning, just like you would on localhost. That's also why I love coding Python in PyCharm: it manages the venv in each project on init.
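For comparison, a Dockerfile that does the venv dance is only a few lines; the image tag and paths are examples:

```dockerfile
FROM python:3.11-slim
WORKDIR /app

# copy requirements first so this layer is cached until they change
COPY requirements.txt .
RUN python -m venv /venv && /venv/bin/pip install -r requirements.txt

COPY . .
CMD ["/venv/bin/python", "main.py"]
```

(Strictly speaking the venv is redundant inside a container, since the image is already isolated, but it keeps the workflow identical to localhost.)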
tcgv|1 year ago
We have a simple cloud infrastructure. Last year, we moved all our legacy apps to a Docker-based deployment (we were already using Docker for newer stuff). Nothing fancy—just basic Dockerfile and docker-compose.yml.
Advantages:
- Easy to manage: we keep a repo of docker-compose.yml files for each environment.
- Simple commands: most of the time, it’s just "docker-compose pull" and "docker-compose up."
- Our CI pipeline builds images after each commit, runs automated tests, and deploys to staging for QA to run manual tests.
- Very stable: we deploy the same images that were tested in staging. Our deployment success rate and production uptime improved significantly after the switch—even though stability wasn’t a big issue before!
- Common knowledge: everyone on our team is familiar with Docker, and it speeds up onboarding for new hires.
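For illustration, a minimal compose file of this kind; the service, image, and registry names are placeholders:

```yaml
# docker-compose.yml -- one file per environment
services:
  web:
    image: registry.example.com/myapp:1.2.3   # the exact image QA tested in staging
    ports:
      - "8000:8000"
    env_file: .env.production
    restart: unless-stopped
```

Deploying is then just "docker-compose pull" followed by "docker-compose up -d".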
bob1029|1 year ago
I have found that going all-in with certain language/framework features, such as self-contained deployments, can allow for really powerful sidestepping of this kind of operational complexity.
If I was still in a situation where I had to ensure the right combination of runtimes & frameworks are installed every time, I might be reaching for Docker too.
marcosdumay|1 year ago
For example, if you have a program that uses wsgi and runs on python 2.7, and another wsgi program that runs on python 3.16, you will absolutely need 2 different web servers to run them.
You can give different ports to both, and install an nginx on port 80 with a reverse proxy. But software tends to come with a lot of assumptions that make ops hard, and they will often not like your custom setup... but they will almost certainly like a normal docker setup.
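The nginx side of that setup is short; the ports and URL prefixes below are examples:

```nginx
# /etc/nginx/conf.d/apps.conf -- one port-80 front for two backends
server {
    listen 80;

    location /legacy/ {
        proxy_pass http://127.0.0.1:8001;   # the Python 2.7 wsgi app
    }

    location / {
        proxy_pass http://127.0.0.1:8002;   # the newer Python 3 wsgi app
    }
}
```

The complexity isn't nginx itself; it's every app having to tolerate living behind that prefix scheme.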
throwaway894345|1 year ago
I would love to do away with the Docker layer, but first the standard Linux tooling needs to improve a lot.
hello0904|1 year ago
Oh, and not only build their app, they can take it a step further and set up the entire new VPS and the app build in one simple script!
authorfly|1 year ago
Forget about "clunky overhead" - the running costs are < 10%. The Dockerfile? You don't even need one. You can just pull the Python version you want, e.g. python:3.11, and git pull your files inside the container to get up and running. You don't need image registries, you don't need to save or tag images, you don't need to write setup scripts in the Dockerfile, and you can pass the database credentials through the environment option when launching the container.
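For example (this assumes a running Docker daemon; the image tag, repo URL, and variable names are illustrative):

```shell
# start from a stock interpreter image, no Dockerfile involved;
# credentials come in via -e, never baked into an image
docker run -d --name myapp \
  -p 8000:8000 \
  -e DB_HOST=10.0.0.5 \
  -e DB_PASSWORD="$DB_PASSWORD" \
  python:3.11 \
  sh -c "git clone https://example.com/myapp.git /app \
         && pip install -r /app/requirements.txt \
         && python /app/main.py"
```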
The problem is, after a year or two you get clashes or weird stuff breaking, and modules stopping support of your Python version, preventing you from installing new ones. Case in point: Google's AI module (needed for Gemini and lots of their AI API services) only works on 3.10+. What if you started in 2021? Your Python - then cutting edge - would not work anymore, and it's only 3.5 years since that release. Yeah, you can use loads of curl. Good luck maintaining that for years though.
NumPy 1.19 code is calling np.warnings, but some other dependency is using NumPy 1.20, which removed .warnings and made it .notices or something.
Your cached model paths for transformers changed their default directory.
You update the dependencies and it seems fine, then on a new machine you try to update them and, bam, wrong Python version: you are on 3.9 and remote is 3.10, so it's all breaking.
It's also not simple in the following respect: your requirements.txt file will potentially have dependency clashes (despite the code running), and might take ages to install on a 4GB VM (especially if you need pytorch, because some AI module that makes life 10x easier rather needlessly requires it).
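Pinning exact versions in requirements.txt at least makes those clashes reproducible across machines; the version numbers here are purely illustrative:

```
# requirements.txt -- pin everything, e.g. generated via "pip freeze"
numpy==1.19.5
transformers==4.30.2
torch==2.0.1
```

But pins don't save you from the interpreter itself drifting between machines, which is exactly the gap containers close.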
Life with Docker is worth it. I was scared of it too, but there are a few key benefits for the everyman / solo dev:
- Literally docker export the running container as a .tar to install it on a new VM. That's one line, and you're guaranteed the exact same environment, no changes. That's what you want, no risks.
- Backup is equally simple: a shell script to download regular backups. Updating is simple: a shell script to update the git repo within the container. You can docker export it to investigate bugs without affecting the running production container, giving you an instant local dev environment as needed.
- When you inevitably need to update Python, you can just spin up a new container on Python 3.14 or whatever with the same port mapping and create an internal API to communicate; the two containers can share resources but run different Python versions. How do you handle this with your solution in 4 years' time?
- If you need to rapidly scale, your shell script could work fine, I'll give you that. But it probably takes 2 minutes to start on each VM. Do you want a 2-minute wait for your autoscaling? No, you want a Docker image / AMI that takes 5 seconds for AWS to scale up if you "hit it big".
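The export/import move from the first bullet, assuming a running daemon and a container named myapp:

```shell
# snapshot the running container's filesystem to a tar
docker export myapp > myapp.tar

# on the new VM: turn the tar back into an image
# (import drops the image metadata, so the run command must be given explicitly)
docker import myapp.tar myapp:snapshot
docker run -d -p 8000:8000 myapp:snapshot python /app/main.py
```

Note that docker export captures the filesystem only, not volumes or environment variables, so those still need to travel separately.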
ffsm8|1 year ago
Sorry, but you've got no idea what you're talking about.
You can also run OCI images, often called Docker images, directly via systemd's nspawn. Docker doesn't create overhead by itself; at its heart it's a wrapper around kernel features and iptables.
You don't need Docker for deployments, but let's not use completely made-up bullshit as arguments, okay?
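For instance, with systemd's own tooling (this assumes skopeo and umoci, or similar, to fetch and unpack the image; the paths are illustrative):

```shell
# fetch an OCI image and unpack it into a runtime bundle, e.g.:
#   skopeo copy docker://python:3.11 oci:python-oci
#   umoci unpack --image python-oci /var/lib/machines/myapp
# then boot it with nspawn, no Docker daemon involved:
systemd-nspawn --oci-bundle=/var/lib/machines/myapp
```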
hello0904|1 year ago
That doesn't necessarily mean there aren't pros to Docker, but one con of Docker is: it's absolutely overhead and complexity that is not necessary.
I think one of the most powerful features of Docker, by the way, is Docker Compose. This is the real superpower of Docker in my opinion. I can literally run multiple services and apps on one VPS / dedicated server and have it manage my network interfaces and ports for me? Uhmmm... yes please!!!! :)
otabdeveloper4|1 year ago
Yes it does: the Docker runtime (the daemon, which runs as root) is horribly designed and insecure.