mkdir my_project_directory
cd my_project_directory
export PIPENV_VENV_IN_PROJECT=1 (to make the virtual environment folder deterministic (.venv/); otherwise you get a hash-based directory (my_project_directory-some-hash-value), which might not be suitable for automated deployments in applications like Docker. I don't know why this is not the default.)
pipenv --python 3.6 (or any particular version number)
pipenv install numpy scipy pandas matplotlib requests
pipenv graph (Gives me a dependency graph)
git add .
git commit -a -S -m "init"
git push
Is this workflow not enough? I have recently started using pipenv after a lot of struggle. The only issue I have is that PyCharm doesn't allow native pipenv initialisation. I always end up creating an environment manually and then importing the project. PyCharm does detect the environment, though.
After having used both, I'm not yet sure which is better of `pipenv` or `pip install --require-hashes` + `python -m venv`. For example, `pipenv sync` doesn't uninstall packages that were previously in the Pipfile{,.lock} but have since been removed, making the sharing of Pipfile{,.lock} via version control kinda pointless. `PIPENV_VENV_IN_PROJECT` not being the default is also annoying for development.
There doesn't seem to be a really fast way to check whether everything is up to date:
$ pipenv install numpy scipy pandas matplotlib requests
....
....installs everything
....
$ time pipenv sync
Installing dependencies from Pipfile.lock (3f6ae1)…
15/15 — 00:00:05
All dependencies are now up-to-date!
real 0m7.219s
user 0m15.645s
sys 0m1.406s
Why does it take so long just to check a bunch of hashes? Is there a better command?
Last time I tried, it also required that the target Python version be installed somewhere on the path. If pipenv used venv instead of virtualenv, used something like pyenv to retrieve/install Python versions, and was distributed as a full executable (rather than requiring a bootstrapped Python), I would actually use it.
pipenv has a huge issue that they refuse to fix: no init command. That means you can only run pipenv commands from the root directory of your project. If you accidentally run pipenv install X in a subdirectory, guess what? You just created a new Pipfile and virtualenv!
npm actually got this right, init helps, and it makes sense to traverse up directories to find a package.json.
As far as I can tell, the main difference is that this also uses pyenv to manage python versions separate from system python packages. There was an article a couple of weeks ago about combining pyenv + pipenv, and this doesn't really seem to add anything over that combination except an opinionated wrapper script.
I feel that all of these language-specific solutions still only solve half the problem. Your code depends on a lot more than _just_ the Python libraries, and often this is exactly what makes projects break on different systems.
Let me make another suggestion: nixpkgs [0]. It helps to define exactly that fixed set of dependencies: not just a published version number, but the actual source code _and_ all its dependencies.
Here we go again. The source of the problems in toy package managers (and I include all language package managers here) is not just the package managers themselves; it's the "version soup" philosophy they present to the user. Not daring to risk displeasing the user, they will take orders akin to "I'd like version 1.2.3 of package a, version 31.4.1q of package b, version 0.271 of package c, version 141 of package d...", barely giving a thought to inter-version dependencies of the result.
Unfortunately, software does not work this way. You cannot just ask for an arbitrary combination of versions and rely on it to work. Conflicts and diamond dependencies lurk everywhere.
Sensible package systems (see specifically Nix & nixpkgs) have realized this and follow a "distribution" model where they periodically settle upon a collection of versions of packages which generally are known to work pretty well together (nixpkgs in particular tries to ensure packages' test suites pass in any environment they're going to be installed in). A responsible package distribution will also take it upon themselves to maintain these versions with (often backported) security fixes so that it's no worry sticking with a selection of versions for ~6 months.
However, I can't say I'm particularly surprised that these systems tend to lose out in popularity to the seductively "easy" systems that try to promise the user the moon.
Some background:
A few months back I was curious about the nix style of packaging so I setup a python project using nix via nixpkgs' pythonPackages. This worked pretty well, but I kept wondering to myself if it was superior to explicitly declaring each version of a package via npm, cargo, bundler, etc.
The way to "freeze" dependencies seemed to involve using a specific git sha of nixpkgs.
From the point of view of a nix newbie, it seems that by relying on nixpkgs to remain relatively stable, you are at the mercy of your package maintainers who might introduce a backwards incompatible change resulting in a build breaking.
One of the alternatives to this was to essentially copy the nix package descriptions from nixpkgs into a project's repo to ensure that packages are explicitly declared.
At this point, it felt as though I was maintaining a .lock file by hand.
Do you think using nixpkgs without pinning it to a specific version, i.e. just using pythonPackages.numpy, is the best way to use nix for dependency management?
There's been quite a bit of discussion about Anaconda and conda in this thread already. Anaconda also takes this distribution approach, and it's targeted specifically at python.
Using a local virtual environment and then building a Docker image removes most of the headaches. I also bundle a Makefile with simple targets. See this as an example: https://github.com/zedr/cffi_test/blob/master/Makefile
New projects are created from a template using Cookiecutter.
It isn't really so bad in 2018, but I do have a lot of scars from the old days, most of them caused by zc.buildout.
The secret is using, as the article mentions, a custom virtual env for each instance of the project. I never found the need for stateful tooling like Virtualenvwrapper.
You can also set a PYTHONUSERBASE environment variable (and `pip install --user`) to scope the installed packages to the project's directory. This is effectively the same as a virtualenv, but doesn't have the requirement on bash or "activation", and it's less magical than virtualenv because these choices are explicit on each command. The tradeoff is that it can be tedious to be explicit, remembering to use `--user` and specify the PYTHONUSERBASE. If you're scripting everything via make, though, then that's not such a burden.
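A minimal sketch of that setup, assuming bash and pip (the `.pyuser` directory name is just an example):

```shell
# Point the per-user install base at a directory inside the project.
export PYTHONUSERBASE="$PWD/.pyuser"

# Python now reports the overridden user base; `pip install --user <pkg>`
# would place packages under ./.pyuser instead of a shared location.
python3 -m site --user-base
```

Every command that should see these packages needs the same PYTHONUSERBASE exported, which is exactly the explicit-but-tedious tradeoff described above.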
"Pipfile looks promising for managing package dependencies, but is under active development. We may adopt this as an alternative if/when it reaches maturity, but for the time being we use requirements.txt."
If I were given the choice between the community-supported, in-development Pipfile/pipenv and the third-party-supported yet-another-package-manager lore to get those best practices, my money would be on Pipfile/pipenv. I've been using it for many projects now, and besides some minor annoyances (e.g. the maintainer's love for color output that is not form following function) it has been a great tool.
I'm not sure why scientists don't use VMs and simply save the virtual disk files. That would at the very least allow them to verify the settings at a later date. Fresh-install reproducibility doesn't seem necessary to verify experimental findings as long as the original VM is available to boot up.
1. Integrating the development environment on their host PC (for example connecting RStudio in R's case, or connecting their web browser back to a server running in the VM in the case of Jupyter) is another set of skills to master.
2. Many data analyses are memory hungry unless you want to resort to coding practices that optimize for memory consumption. The overhead of running a VM is a bummer for some scientists.
3. Many scientists are not using Linux top-to-bottom, and therefore don't have a great way of virtualizing a platform that they are familiar with (e.g. Windows, macOS)
Can people think of others? I'm sure I'm missing some.
(EDIT: To be clear, I think VMs are a great path, but I do think there are some practical reasons why some scientists don't use them)
This has the added benefit of letting you encode the system dependencies (OS packages) for library build time and for run time.
Docker images are also a great way to distribute Python CLI tools, certainly far better than installing via pip which either pollutes global state or is confined to a certain project's virtualenv.
Bingo. If you’re not vendoring the binaries of your dependencies as part of a release then you’re doing it wrong.
It doesn’t have to be docker, containers just makes it easy to have immutable snapshots. Anything that packages it all up (including a simple tarball) is enough.
genuine question - is nobody using anaconda/conda in production ? I have found the binary install experience in conda far more pleasant than in anything else.
I use miniconda in production, and it's awesome. It's on par with (or even better than) npm except perhaps on the number of packages in the repository, supports pip, does everything I need and then some.
I'm baffled myself at the anaconda-blindness in the general crowd, which is evident every single time this comes up for discussion.
What happens when there isn't a conda recipe for some package or inexplicably some dependency? Do I go back to pip? sudo pip ;) ? Use virtualenv?? Nothing is ever solved.......
Interesting reading; I share some of the points in the post. However: one more dependency manager?
Mostly I've used plain `python -m venv venv` and it has always worked well. A downside: you need to add a few bash scripts to automate the typical workflow for your teammates.
Another point is that it does not work well with PyCharm and does not allow putting all dependencies into the project folder as I used to do with venv. (I just like to keep everything in one folder so it's easy to clean up.)
Are there any better practices to make life easier?
Actually, I recommend bash scripts for automating team workflows as a best practice.
You create a wrapper script around your application that calls a dev environment set-up script, that [if it wasn't done yet] sets up the environment from scratch for that project or application, and loads it before running your application. This does a couple things.
First, it removes the need to train anyone on using your best practices. The process is already enshrined in a version-controlled executable that anyone can run. You don't even need to 'install lore' or 'install pipenv' - you just run your app. If you need to add documentation, you add comments to the script.
Second, there's no need for anyone to set up an environment - the script does it for you. Either set up your scripts to go through all the hoops to set up a local environment with all dependencies, or track all your development in a Docker image or Dockerfile. The environment's state is tracked by committing both the process scripts and a file with pinned versions of dependencies (as well as the unpinned versions of the requirements so you can occasionally get just the latest dependencies).
Third, the pre-rolled dev environment and executable makes your CI-CD processes seamless. You don't need to "set up" a CI-CD environment to run your app. Just check out the code and run the application script. This also ensures your dev environment setup scripts are always working, because if they aren't, your CI-CD builds fail. Since you version controlled the process, your builds are now more reproducible.
All this can be language-agnostic and platform-agnostic. You can use a tool like Pipenv to save some steps, but you do not need to. A bash script that calls virtualenv and pip, and a file with frozen requires, does 99% of what most people need. You can also use pyenv to track and use the same python version.
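A sketch of such a wrapper script; the stub app.py is generated here only for the demo, whereas in a real repo app.py and a frozen requirements file would already be checked in:

```shell
set -eu
cd "$(mktemp -d)"                       # demo sandbox; a real run.sh would cd
                                        # to its own repo directory instead
printf 'print("app running")\n' > app.py

# First run: build the environment from scratch; later runs reuse it.
if [ ! -d .venv ]; then
    python3 -m venv .venv
    # .venv/bin/pip install -r requirements.txt   # frozen, pinned versions
fi

.venv/bin/python app.py                 # everyone (devs, CI) runs it one way
```

Because the bootstrap is idempotent, CI and new teammates get the same environment by running the same script.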
> Another point is that it does not work well with PyCharm and does not allow putting all dependencies into the project folder as I used to do with venv.
This is annoying for AWS lambdas too, because you have to bundle the dependencies and zip it. It's pretty trivial to go Pipfile -> requirements.txt -> pip install -t if you use a Makefile, but it's definitely an omission. I asked about it on their github though and it is a known issue, hopefully it'll be there soon.
I bitch a lot about npm, but then I remember that time when python's package distribution drove me to learn a new language. I can't help but notice that TFA and all the comments here are only talking about one end of this: managing your dev environment. Is there a similar work explaining how to distribute python packages in a straightforward manner? Is that article compatible with this one?
The justification was that the Anaconda installer is too heavy. The kitchen sink Anaconda installer is not designed for the author's use case. Miniconda is the provided way to bootstrap conda onto a system.
Version pinning is technical debt and a fool's errand. New versions will always come out and your new development is confined to what once worked. You need to keep testing with current versions to see what will break when you upgrade and fix it as soon as possible so as to minimize the odds of a big breaking change.
It may keep your environment stable for some time, but that stability is an illusion because the whole world moves on. You may be able to still keep your Python 2.2 applications running on Centos 3 forever, but you shouldn't want to do it.
One thing that comes to my mind: when I was starting out with Python, I was eager to mock Java people and their absurd approach (write everything in Java, specify a full classpath for all dependencies, etc.). I pointed out how easy and quick it was to program in Python rather than in Java.
I did not appreciate that a linear, well-defined (by the language) approach to dependencies, and a clear API between the system libraries (java, javax) and the user libraries, actually gives A LOT of value, even though it's more cumbersome to use.
It looks like tech.instacart.com is hosted on Medium. The redirect is part of the auth flow. If you have a Medium account, you would have logged in to medium.com, not tech.instacart.com. If you don't have a Medium account, Medium still will want to add first-party tracking information to your interaction with tech.instacart.com and all other Medium properties. So this client-side redirect flow enables them to capture that association.
This is presumably what the `gi=85c0588ca374` query parameter is in the follow-on redirect. I would guess that `gi` stands for "global identity" or something.
I ran into a migraine last week: cleaning up requirements.txt
How do you determine which requirements are no longer needed when you remove one from your code? In node, your package.json lists only packages YOU installed. So removing them cleans up their dependencies. But in Python, adding one package with pip install might add a dozen entries, none indicating they're dependencies of other packages.
At most projects we're using pip-tools, which generates a fully pinned requirements.txt from a manually kept (and clean) requirements.in that contains only the specific packages you need, without their dependencies.
We use a separate file to list the direct dependencies, 'ddeps.txt' and 'ddeps-dev.txt' for development deps.
Once we update one of these files a clean venv is created, the dependencies installed and the freeze output saved as requirements.txt.
Then the dev dependencies are installed and the output of that freeze is saved to requirements-dev.txt.
This preserves the dependencies where we made the conscious choice to require them and also allows us to explicitly vet any new dependencies and versions.
I’m not sure about other people, but that is how I use requirements.txt. You don’t have to dump the entire output of pip freeze in there. You can just list the dependencies you want.
It really bothers me that they're skipping these two as separate steps. Track "what I asked for", use "what I ended up with" for deployment. Otherwise you're just saying "use pip freeze" regardless of wrapping magic around it.
If you're already down that road, pipdeptree is your friend. It will resolve your frozen packages to at least tell you which are top-level and which are dependencies-of-dependencies. There are still exceptions if you're using a dependency both directly and via another module, but having a requirements.in from the pipdeptree parents will have you covered.
Get that list, set them all to module>=version in development, pip install -r requirements.in, then pip freeze > requirements.txt to get hard version locks for deployment.
As others have stated, pip-tools handles this separation for you.
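The manual version of that separation can be sketched offline like this (the requests pin is a placeholder; a real run would also `pip install -r requirements.in` into the venv before freezing):

```shell
cd "$(mktemp -d)"                                # demo sandbox
printf 'requests>=2.18\n' > requirements.in      # loose spec: what you asked for

python3 -m venv .venv
# .venv/bin/pip install -r requirements.in       # resolve the loose spec (network)
.venv/bin/pip freeze --all > requirements.txt    # exact pins: what you ended up with

grep '==' requirements.txt                       # every entry is name==version
```

Only requirements.in is edited by hand; requirements.txt is always regenerated, never modified.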
Thanks for the link. That looks interesting; I'll have to give that a try. When I started reading the link my first thought was Pex from Twitter. I don't know how comparable XAR is to Pex but it's worth a look to compare the two.
Since we're sharing XKCD cartoons, here's one that comes to mind: https://xkcd.com/927/
So not to disappoint, here's another contestant: Poetry [0]
That said, in my experience it works best if you don't force any particular workflow on your developers, but maintain a solid and repeatable process for testing and deployment. People have different mental models of their development environments -- I personally use virtualfish (or virtualenvwrapper if I'm on Bash), while a colleague works with `python -m venv`; and we have played with pipenv, pyenv, anaconda and poetry in various cases.
As long as your requirements are clearly defined -- requirements.txt works perfectly well for applications, and setup.py for libraries [1] -- any method should be good enough to build a development environment. On the other hand, your integration, testing and deployment process should be universal, and fully automated if possible, and of course independent of any developer's environment.
As a form of version pinning, this locks in old versions and creates technical debt. A few years downstream, you're locked into library modules no longer supported and years behind in bug fixes.
The joy of not having to deal with broken production builds when dependencies change under your feet is well worth the "technical debt" in my opinion. Reproducible builds are valuable in their own right.
We recently went through this process at our company and chose pipenv as the dependency management tool. As mentioned in the article, pipenv is under active development, but it takes care of many things we previously had custom scripts for, such as requirements hashes, a built-in dependency graph, automatic retries of failed installs, automatic re-ordering of dependency installations, etc. It also has a few quirks: we had to pick a version that had most commands working, and pipenv install is painfully slow and didn't seem to have a caching strategy for already-built virtualenvs.
Doesn't using requirements.txt fail to account for (I forget the official name) transitive dependencies? The dependencies in your requirements.txt might themselves have dependencies whose version numbers change over time.
This seems like something pip freeze could handle but doesn't.
Ruby practices based around Bundler aren't perfect, but they did solve _this_ level of problem ~7 years ago.
It remains a mystery to me why Python seems to have won the popularity battle against Ruby. They are very similar languages, but in all the ways they differ, Ruby seems superior to me.
crdoconnor|7 years ago
https://www.reddit.com/r/Python/comments/8elkqe/pipenv_a_gui...
Personally I think poetry doesn't get enough visibility. It's not as hyped as pipenv but it feels a bit nicer:
https://poetry.eustace.io/
yrro|7 years ago
After discovering PYTHONUSERBASE, I no longer need any of the plethora of wrappers around venv/virtualenv.
[0] - https://nixos.org/nixpkgs/
michaelmcmillan|7 years ago
When starting a new project:
When running that project: Many people don't realize that venv/bin/ contains all the relevant binaries, with the right library paths, out of the box.
Rjevski|7 years ago
spapas82|7 years ago
That (with the addition of mkvirtualenv and friends) is the workflow I use for both dev and prod, and I'm really happy with it!
nickjj|7 years ago
1. Build a Docker image with your dependencies
2. Develop application
3. Repeat 1-2 until ready to deploy
4. Run Docker image in production with same dependencies as development
5. ??
6. Profit!
As long as you don't rebuild in between steps 3-4, you'll have the same set of dependencies down to the exact patch level.
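The image-build step above typically amounts to a Dockerfile along these lines (base image, paths, and app.py are placeholder assumptions):

```dockerfile
FROM python:3.6-slim
WORKDIR /app

# Install pinned dependencies first so Docker caches this layer
# until requirements.txt actually changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Shipping the built image (rather than rebuilding in production) is what guarantees the same dependency set down to the patch level.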
sandGorgon|7 years ago
Going forward, the trend is going to be pipenv+manylinux (https://github.com/pypa/manylinux), but conda is super pleasant today
tamatsyk|7 years ago
Pipenv sounds great but there are some pitfalls as well. I've been going through this post recently and got a bit upset about Pipenv: https://chriswarrick.com/blog/2018/07/17/pipenv-promises-a-l...
superbatfish|7 years ago
Conda really is the tool he wants; he just seems not to understand that.
xycco|7 years ago
https://github.com/jazzband/pip-tools
textmode|7 years ago
Is there some commercial advantage?
Why not just post the Medium URL?
https://medium.com/p/f1076d625241
This 302 redirects to tech.instacart.com
dorfsmay|7 years ago
https://code.fb.com/data-infrastructure/xars-a-more-efficien...
jungleai|7 years ago
This is an excellent post to get started http://sevag.xyz/post/xar/
[0] https://github.com/sdispater/poetry
[1] https://caremad.io/posts/2013/07/setup-vs-requirement/
AstralStorm|7 years ago
Here's to Python 4 actually fixing this mess.
Alir3z4|7 years ago
That's all.
We Python developers are fortunate to have amazing tools such as pip, virtualenv, etc.