Pretty simple to set up and use, but I'm not sure if it's better than learning `docker` and `docker-compose`. Also, the default `python <file>.py` doesn't remove the container after the file is done running and the image is tagged as `cage/<name of project>` instead of allowing the user to tag it with their own name. Both of these are easy to fix with `docker` commands, but if this is meant to be a library that helps you avoid `docker` then it's not doing its job.
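For anyone who wants the workaround in the meantime, both fixes are one flag each in plain Docker. This is a dry-run sketch (the commands are `echo`ed rather than executed, and the image tag and entry script are hypothetical):

```shell
# Dry run: drop the leading `echo` to actually build and run.
# `-t` tags the image with your own name instead of cage/<project>;
# `--rm` removes the container as soon as the script exits.
echo docker build -t myname/myproject:dev .
echo docker run --rm myname/myproject:dev python main.py
```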
I was going to say something similar. With venv moved into the stdlib in Python 3, and with ongoing development and improvements to the `docker-compose` workflow, I'm not sure where a tool like this fits.
* venv is lighter and more or less standardized
* `docker-compose` is more general and language agnostic
So something like `cage` sits somewhere in between... it's neat, no doubt, but I'm not sure where I would use it personally.
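For comparison, the entire stdlib venv workflow is just a few commands, with no daemon involved (a sketch assuming `python3` on a POSIX shell; `--without-pip` only keeps the demo quick):

```shell
python3 -m venv --without-pip /tmp/demo-env   # create: no daemon, just a directory
. /tmp/demo-env/bin/activate                  # enter the environment
python -c 'import sys; print(sys.prefix)'     # now resolves inside /tmp/demo-env
deactivate                                    # leave it
rm -rf /tmp/demo-env                          # "uninstall" is a plain delete
```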
Thanks for the feedback! I'll make sure to add the container removal and naming as new features.
You can indeed run all the commands easily just with Docker.
Cage aims to port all the functionalities from virtualenv to use Docker. After this is achieved I can start working on extending those functionalities.
Slightly off-topic, but does anyone develop Django/Python on a windows machine, and may want to offer any general advice or tips?
I am using virtualenv, but I'm wondering what else I should be doing to make it easier. Previously I used Cloud9 (a cloud IDE) for Rails development. In both cases, I am pretty much a hobbyist, but I don't mind spending some money to make things easier, since I'd rather spend my time doing the fun part rather than developer ops.
I just switched from Mac to Windows 10, and I work on a lot of Django projects.
I use PyCharm (love JetBrains) + Vagrant and it's working great so far. I'd highly recommend spending the time learning Vagrant - it can be incredibly frustrating at times (on both Mac and Windows), but once you get it working right it's really rewarding. Being able to re-instantiate a VM when something gets really botched is a great feeling. I don't have to worry about my host machine getting messed up, so I'm free to tinker around in the VM, and that speeds up my learning.
The one downside I've encountered so far is how many different "terminal"-style apps I need to run to get everything working. Currently I need PowerShell (as admin) for spinning up the VM, PuTTY to SSH into it, and Git Bash for source control. I could probably consolidate them all into PowerShell, but for now it works.
Windows is fine for Python development. Python is just an executable, and you can even use cmd.exe. Some will tell you to use Cygwin or something, some for good reasons and others just because they love to hate. You can check out Windows 10's bash as well.
If you find Windows some sort of hindrance, you can just use an Ubuntu VM, or reformat entirely to Ubuntu. I've used only Ubuntu for over two years now and barely miss Windows. There's not much wrong with Windows, though, especially Windows 10 with bash now.
At my previous company we used Vagrant for most projects, and it provided a nice abstraction layer that made things work reasonably well across all platforms (macOS, Windows, and Linux). Vagrant on Windows definitely had its challenges, but projects themselves ran well once those were sorted out.
I've heard the company has since moved to Docker, in search of those same benefits but with smaller overhead (specifically in terms of time spent managing the abstraction layer), and I gather the Windows folks are happier with it than they ever were with Vagrant.
My own (macOS-only) experience with both has been mixed, but certainly not worse than just running things locally via virtualenv. Vagrant introduces a full VM into the stack and Docker seems to have a lot of stability issues (at least on macOS 10.12 Sierra), so my next plan is to try combining the two, and running Docker on a Fedora VM. I'm hoping that any Docker nightmares will then at least be confined to the guest system, leaving my main machine mostly out of it, and using Docker to set up the actual project stuff may mean less Linux administration of the VM itself.
I recommend running an Ubuntu VM, with PyCharm as your IDE. There are so many little things that are more painful in Windows than in Linux.
For example, getting psycopg2 running under Cygwin was an odyssey, whereas it's trivial to install on Linux. And there are plenty of Python packages that assume you have gcc and unix headers installed, which makes chasing dependencies painful.
Hi! I develop early stages of Django apps on Windows (it's probably bad practice, but I usually wait to test Postgres until I'm on a Linux box, or when I'm developing on a Mac; we're oddly platform agnostic where I work). I have found Wing IDE to be incredibly productive (I've just started playing with PyCharm). I love Wing because of its debug options: you can debug templates as well as running processes. You can try Wing Professional for free to see if you like it. It's saved me a lot of time!
Although I'm not sure I would use this, it's the kind of stuff I've been waiting for: nice clean containers used as part of an active dev cycle rather than just the production push.
The ultimate dream of an end user running a desktop composed entirely of containers still seems a bit far off, though...
About a year ago I switched our dev environments from Vagrant to docker-compose environments with a "helper" shell script to run the most common docker-compose operations you'd do in a day. It starts up so much faster than Vagrant, it's easy to add new services, and it's easy to add and remove test data. Here's an example of the kind of setup I have:
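A minimal sketch of that kind of helper (not the actual script; the service name `web` and the subcommands are hypothetical, and the `echo` inside `dc` keeps this a dry run so it is safe without Docker installed):

```shell
#!/bin/sh
# Hypothetical docker-compose wrapper; remove the `echo` to run for real.
dc() { echo docker-compose "$@"; }

case "${1:-help}" in
  up)    dc up -d ;;                    # start the whole stack in the background
  test)  dc run --rm web pytest ;;      # one-off container for the test suite
  reset) dc down -v && dc up -d ;;      # drop volumes (test data) and restart
  *)     echo "usage: $0 {up|test|reset}" ;;
esac
```

The `reset` case is the "easy to add and remove test data" part: `down -v` discards the volumes, so the next `up` starts from clean fixtures.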
Docker is less of a pseudo-VM and more like a linker. No matter what the environment is like on the host, you can be sure it's identical to yours within the container.
So apart from having separate disk+network namespace, how is this different from a virtualenv? I feel like configurable sockets and paths should provide everything this can, but with less overhead.
Less system-resource overhead, more human overhead. Virtualenvs can bleed over if they're not configured properly or if you forget to deactivate. They're harder to fully encapsulate and carry around (you have to create them with a special flag, or face a bunch of manual find-and-replace when changing paths). They sometimes end up with system dependencies (though `--no-site-packages` has been the default for a long time now).
This would be a shortcut that bypasses the need for venvs entirely. I'm not sure I would use it, but that's the difference I see.
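The carry-around problem is easy to demonstrate: a venv records absolute paths at creation time, which is where the manual find-and-replace comes from. A sketch, assuming `python3` on a POSIX system:

```shell
# A venv writes absolute paths into its config and scripts at creation time,
# so `mv`-ing the directory elsewhere leaves them pointing at the old location.
python3 -m venv --without-pip /tmp/venv-demo
grep '^home' /tmp/venv-demo/pyvenv.cfg           # absolute path to the base interpreter
grep 'VIRTUAL_ENV=' /tmp/venv-demo/bin/activate  # the venv's own path, recorded here
rm -rf /tmp/venv-demo
```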
I'd strongly suggest you contact the maintainer of the python3 package and request a backport. 3.5.1 is in testing, so it should be possible, and perhaps even easy to backport.
But seriously: if there's demand, it'll get there. I've been using Python 3.5 on FreeBSD, OpenBSD, and Arch Linux for months. If enough software is written using the new hotness like async, there will be pressure and motivation to get it packaged into stable.
I hope.
For people on RHEL, the IUS repository[0] guys do a great job.
Could someone please explain why this might be useful? It seems like something that a crypto developer might like but if I'm developing something innocuous why would it matter if it runs inside a container?
Personally I find the various CLI tools typically found on a BSD or Linux rather enticing for development, but many people do without that.
Apart from that, I'm not sure what you mean or what your issues are. Could you specify?
https://github.com/bschwind/api-starter-kit/blob/master/help...
What's funny is I don't currently use docker in production, but I love it for dev environments.
I work on Guix; it provides a 'guix environment' command, which is sort of a generic 'virtualenv', and IMO is a simple lightweight alternative: https://gnu.org/s/guix/manual/html_node/Invoking-guix-enviro... .
It might be worth looking at pipfile.
Great project though!
[0] https://ius.io/