sebra | 2 years ago
My approach is to just use Docker, no virtualenvs. I get that you might run into the multiple-interpreters issue in theory, but across multiple projects in the past 5 years I haven't seen it. Also, this might no longer be true, but avoid using Alpine. If you're deploying Django there is no reason to optimize image size, and Alpine is missing a lot of things (e.g. at least a couple of years ago, wheels were not supported, leading to very slow build times).
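As a minimal sketch of the kind of image I mean (the project layout and gunicorn entrypoint are hypothetical), a Debian-slim base instead of Alpine means prebuilt manylinux wheels install directly rather than compiling from source:

```dockerfile
# Debian-slim base: manylinux wheels install as-is, unlike musl-based Alpine
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached while app code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# "myproject" is a placeholder for your Django project name
CMD ["gunicorn", "myproject.wsgi"]
```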
I only do a single requirements.txt. Anything that makes your prod and local environments differ is a bad thing in my opinion. Fine, black might make my image a couple of MBs larger, but why would that matter? On the other hand, trying to figure out why something that works on my machine does not work on prod is always a nightmare.
Setting dependencies as ranges in requirements.txt lets me pick up bugfixes automatically without spending time on it (e.g. django>=4.2.3,<4.2.99 django-ninja>=1.0.1,<1.0.99). Again, I might have run into one or two issues from this over the past couple of years, and I've saved a lot of time.
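Those ranges go straight into requirements.txt; pip also has the compatible-release operator ~= (PEP 440), which expresses the same "patch releases only" intent more concisely than a hand-written .99 upper bound:

```text
# patch-level upgrades only:
# django~=4.2.3 means >=4.2.3,<4.3 -- same intent as the <4.2.99 bound
django~=4.2.3
django-ninja~=1.0.1
```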
Getting a project running locally should not take more than 1 minute (a couple of .env vars + docker-compose up -d should be enough).
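A compose file for that one-minute setup might look roughly like this (the web/db split and Postgres image are assumptions, not anything from the thread):

```yaml
# docker-compose.yml -- hypothetical Django + Postgres setup
services:
  web:
    build: .
    env_file: .env            # the "couple of .env vars"
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

With a .env file in place, `docker-compose up -d` brings both services up in the background.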
The biggest practical issue in Python dependency management is dependencies not pinning their own dependencies correctly.
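One mitigation for that, as a sketch: pip's constraints file lets you pin a misbehaving transitive dependency yourself without declaring it as a direct requirement (the package name below is only illustrative):

```text
# constraints.txt -- pin a transitive dep that upstream left unpinned
# (urllib3 is just an example; it never becomes a direct requirement)
urllib3==2.2.1
```

Install with `pip install -r requirements.txt -c constraints.txt`; the constraint only applies if something in the tree pulls that package in.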
dagw | 2 years ago
Unless you're writing code that only you will deploy to machines that you control, "prod" and "local" will always be different. If you're only targeting a fixed version of a fixed OS on a fixed architecture, then most things are easy.
For me, "local" is a Mac running ARM; for the person pip installing my tool, "prod" might be Linux or Windows. I can't punt and say "your prod must equal my local or it won't work" (or I can, but it would greatly diminish the usefulness of the stuff I develop); I have to deal with it, and I want tools that make this hard problem as easy as possible.