Personally I'd prefer going with NixOS to achieve the same result. That way you don't even need a Docker installation. As a bonus, you can actually install the Nix package manager on macOS if you're not into Linux (and this way there is no need for virtualization if you're on a Mac).
I love the proposition, but frankly it seems to take too much time to learn even the basics. It would be awesome for someone to build a Docker-like experience on top of Nix, though.
So you just shifted your dependency from Docker to Nix. It might be more fun and an interesting learning experience, but it's also more complicated (or at least not as widely used as Docker is).
It looks impressive but it isn't clear how much you have to pay for services from them. It isn't free and you aren't in control. Your snapshots and abilities to rollback etc are likely to be dependent on their storage servers.
They certainly should monetise, but not making it clear is what I object to. I've raised an issue asking for clarification in their community wiki: https://github.com/nix-community/wiki/issues/34
Hmm, I don't think you can compare IntelliJ, a fully featured IDE with refactoring functionality, full-text search, a debugger and so on, to a vim setup with plugins in a Docker container just because they both edit text. That's what the author did at the end of the write-up. It's like comparing jQuery to Node.js: yeah, they're both generally for JavaScript, but they serve different purposes.
I agree, the title of the article is a bit misleading. In general, you’d still need an IDE installed on the host machine, which can then connect to a runtime on the container. With VS Code and remote containers, it’s quite easy.
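For reference, with VS Code's Remote - Containers the whole setup can be a small devcontainer.json checked into the repo. A minimal sketch (the image and extension names here are only examples):

```jsonc
// .devcontainer/devcontainer.json (illustrative values)
{
    "name": "my-dev-env",
    // any image from a registry, or point at a local Dockerfile instead
    "image": "rust:1.56",
    // extensions installed inside the container, not on the host
    "extensions": ["rust-lang.rust"]
}
```

VS Code then builds/attaches to the container and runs its language servers and debugger inside it, while the editor UI stays on the host.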
Alternatively, maybe it’d be possible to have the container expose an IDE over http (possibly vscode through the browser?).
"Choosing a base image can be quite daunting. I’m always a fan of Alpine Linux for my application containers, so that’s what I chose."
Just be aware that means the musl libc, which is often fine, but not always. Software that expects glibc can crash or have unpredictable behavior. The JVM is a good example, unless you get a JVM that was originally built against musl.
And sometimes also issues with busybox, where it differs from other implementations of the same tools.
Yeah, this is why I use debian slim as an image for most projects unless I'm prioritizing small size.
It's small and popular enough that a lot of other images already use it, so chances are you don't have to redownload it.
Does anyone use docker for full-fledged development on OSX? I am a Linux user and tried setting up a dev environment for my colleagues on OSX but file system I/O was extremely slow and completely unusable.
Yep, I use it. There are two tricks to mitigate this:
1. Using a :delegated or :cached flag when using a bind mount can speed it up a bit
2. For folders that need a lot of RW, but don’t need to be shared with the host (think node_modules or sbt cache), I bind a docker volume managed by docker-compose. This makes it extremely fast. Here's an example: https://gist.github.com/kamac/3fbb0548339655d37f3d786de19ae6...
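For illustration, points 1 and 2 together might look like this in a docker-compose.yml (the service, image and volume names are made up):

```yaml
# Illustrative docker-compose.yml: the source tree is a :cached bind mount,
# while the write-heavy node_modules lives in a docker-managed named volume.
version: "3.8"
services:
  app:
    image: node:16
    working_dir: /app
    volumes:
      - .:/app:cached                   # bind mount with relaxed consistency
      - node_modules:/app/node_modules  # named volume, fast I/O on macOS

volumes:
  node_modules:
```

The named volume shadows the node_modules directory of the bind mount, so heavy dependency I/O never crosses the host/VM file-sharing boundary.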
IIRC there are some mount options that might help if you search the docker docs, but for me I just create some local docker volumes to hold my code and mount those instead of mounting host folders. It feels a little weird and unnatural that your code is 'hidden' in docker's volume folder (under /var/lib/docker/volumes) in the VM instead of in a folder on your host machine. But it gets you into more of a mindset that this is just a temporary checkout and the real persistent home for the code is your source repository (github, etc.), so you don't let things linger without being checked into a branch somewhere.
We ran into this as well. The I/O would cripple the system. It seems like if you have a compiled language in which the sources are compiled into a single binary that runs on your Docker VM, things are not so bad, but in an interpreted language with the sources on the host’s disk and the interpreter running on the guest, the VM needs to reach across that host/guest VM boundary all the time. We also tried all the tricks to speed it up, but we ultimately gave up and just used native MacOS processes.
Yes I do. I've been trying to develop on macOS for a few days now, using Docker. But between the low amount of RAM on the laptop and the poor I/O performance, I decided to give GitHub Codespaces another try, and I'm very pleasantly surprised. It feels fast enough and I can switch computers without thinking about it.
Go the next step and run a local kubernetes cluster with kind or k3s (it will take you 30 seconds to have a k8s cluster going). IMHO the kubectl CLI is a lot more logical than docker's CLI. You can create all your local storage volumes ahead of time, create a pod that attaches to it, and then just kubectl exec into the pod vs. writing a long fiddly docker command line string (or crafting a docker-compose.yml). It's easy to adjust the pod as necessary while it runs too, like adding a service to expose ports without rerunning the container.
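A sketch of that workflow, assuming a kind cluster is already running (kind create cluster) and with arbitrary names and paths:

```yaml
# dev-pod.yaml: a long-running pod with a node-local volume for code.
# Apply with:  kubectl apply -f dev-pod.yaml
# Enter with:  kubectl exec -it dev -- sh
apiVersion: v1
kind: Pod
metadata:
  name: dev
spec:
  containers:
    - name: shell
      image: debian:bullseye       # any base image you like
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: code
          mountPath: /workspace
  volumes:
    - name: code
      hostPath:
        path: /data/code           # lives on the kind node, survives pod restarts
        type: DirectoryOrCreate
```

Editing the pod spec and re-applying (or adding a Service to expose ports) replaces the "long fiddly docker command line" with declarative files you can keep in Git.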
But if you do like the idea of docker dev environments, check out a tool like batect: https://github.com/batect/batect It's somewhat like if docker-compose had make-like commands you could define. Your whole dev environment and workflow can be defined in a simple yaml config that anyone can use.
Won’t setting up a k8s cluster require writing resource definitions? I imagine you’d need to write a statefulset. How’s that better than writing a docker-compose?
I’m not sure how vscode does it, but it allows you to publish ports in real time as well.
Ashley Broadley's github page at https://github.com/ls12styler sadly doesn't contain a repo with his rust dev work to date (I will ask him as it has some really good stuff in the article.)
----
Very nice. I'm doing similar at the moment. Maybe take a look at https://www.reddit.com/r/rust/comments/mifrjj/what_extra_dev... for a list of useful cargo built-in and third-party subcommands.
As you note, common recommended app crates (source) should be gathered separately.
I have several other links and ideas, e.g. supporting different targets such as x86_64-unknown-linux-musl, but too long for this post!
Cool stuff. A few months ago I looked into building a CITRIX alternative based on the idea that you would run application frontends in a local secure docker container while running the backend in the cloud. E.g. you could run VSCode locally while actually compiling in a kubernetes cluster. I eventually ruled out the idea for business reasons, but from a technical perspective it's doable and probably useful. At the time I thought the primary advantage would be reduced cloud costs and lower latency.
Until you reimplement every language as transpilers to elisp, that's not the same thing at all. In this respect, Emacs is actually in the same tool category as VSCode or Vim. As you'd set up Dev Containers in VSCode, you'd set up TRAMP in Emacs to ssh into a Docker environment, or (more likely for Emacsians I guess) access a Guix environment or Nix shell.
Nice, similar thing to what I do, a few more things:
1. If you want X11 (haven't figured out audio yet), add:
-e DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix:ro
2. For Firefox, also add --shm-size=2g and start it with: firefox --no-remote
3. For entering the container, just map a command that takes the container name as a parameter and an optional image type as a second one. That way you get a fresh environment whenever you want. The command starts the container if it isn't running, or execs into it otherwise. I go the extra length of having Docker start an ssh server inside the container and then I just ssh into it.
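A hypothetical helper that assembles the flags above into one docker run invocation (the default image and the DRY_RUN switch are my additions; sharing the X11 socket assumes a Linux host):

```shell
# Build and run the docker command from the X11 flags above.
# DRY_RUN=1 prints the command instead of executing it.
run_gui_container() {
    name="$1"
    image="${2:-debian:bullseye}"   # default image is only an example
    opts="--rm -it --name $name -e DISPLAY"
    opts="$opts -v /tmp/.X11-unix/:/tmp/.X11-unix:ro --shm-size=2g"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "docker run $opts $image"
    else
        docker run $opts "$image"
    fi
}

DRY_RUN=1 run_gui_container firefox-box
```

Inside the container you would then start firefox --no-remote as described above.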
For audio, maybe you can use pavucontrol on your host to make pulseaudio accept network connections, and then use PULSE_SERVER=your_host_ip command_which_plays_audio in the docker container? (I haven't tried it).
That is a nice write-up. We use a very similar setup to containerise our embedded toolchains.
Additionally, we use a wrapper script to symlink the current containerised project to the same location in the host system. This ensures that output paths in the containerised environment point to valid paths on the host.
E.g., docker mount:
/home/me/dev/proj1234 -> /workspace
symlink on the host:
ln -sfn /home/me/dev/proj1234 /workspace
I run VSCode and Eclipse Theia in Docker containers and access them via browser. Depending on what I do I start either with a Java, Node, or Python container.
I like the idea of using k8s as suggested upthread, I just have not had much time to push changes / work on it recently. One thing worth mentioning is that I have moved to podman - it seems a lot slower to start up, but it runs in user space, which seems sensible.
When I run docker on my laptop, the fans turn on full speed and stay there (recent macbook pro). I kill docker any time I don't expect to use it in the next hour. If I'm not on AC power, it halves my battery life. I wouldn't even consider keeping docker running constantly. Is this not the normal experience?
Can you run ansible or terraform inside a docker container?
There are two parts of the dev environment - the programmer preferences and the project libraries and other infrastructure. What I would like is to have a way to compose those two and ideally something that would work the same way inside a docker container as in a full VM.
To provision stuff _inside_ your docker container from ansible I've found packer is the easiest way to do it: https://www.packer.io/docs/provisioners/ansible-local There was apparently a tool called ansible-bender that did something similar but was abandoned. Packer makes it easy to define a container that's provisioned with a local ansible playbook.
Ultimately though I think using ansible with containers is a code smell. If you provision in a container with ansible you have to pull in an entire Python install, and that blows up your container size fast. You can do multi-stage builds and carefully extract the stuff you need but it's a real pain. IMHO minimal shell scripts that run in very tightly defined and controlled environments of the dockerfile (i.e. they don't have to be super flexible or support any and every system, just this container) are the way to go.
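A minimal Packer HCL sketch of the ansible-local route (note that ansible-local runs Ansible inside the image, so Ansible has to be installed in it first; all names and paths are illustrative):

```hcl
source "docker" "dev" {
  image  = "debian:bullseye"
  commit = true
}

build {
  sources = ["source.docker.dev"]

  # ansible-local needs Ansible present inside the image.
  provisioner "shell" {
    inline = ["apt-get update && apt-get install -y ansible"]
  }

  provisioner "ansible-local" {
    playbook_file = "./playbook.yml"
  }
}
```

Which also demonstrates the size complaint above: the image now carries a full Python plus Ansible install unless you strip it back out afterwards.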
I have a co-worker who had the idea of stuffing Ansible into a container. This would allow anyone to easily run any Ansible playbooks without having to deal with dependencies and versions. It’s absolutely terrible to use. You end up having wrapper scripts to make it even remotely usable.
Mounting things in the right locations is a nightmare, and even minor changes become a hassle. For Ansible, just learn to use virtualenvs. Terraform may be a little better.
I only get a new laptop once every several years. Doesn't really seem worth it to me personally. I also sort of like starting fresh in a way. Granted I have my dot-files on github to make that part easier. But I don't mind running the install command for things as they come up.
I'm curious if there are other benefits to this approach though besides just saving time when setting up a new machine. The article mentioned "you end up with (in my opinion) a much more flexible working environment." Any ideas what they might mean?
Reproducible dev environment. Easier to reproduce some bugs for fixing. More certainty that it doesn't just work on your laptop because you have an undocumented dependency installed. Easier to test the setup process on a clean machine and vary things about the machine setup. To test what happens if you have Python available system-wide vs. if you don't. More precise development history since you have docker-compose.yml under Git, making "time travel" easier.
There's all kinds of little benefits that don't seem that important until you have use for them. Of course Guix and Nix go closer to being actually reproducible, but Docker is better than nothing.
I now go a bit further. I used to keep my dotfiles around as well, but last time I decided to go completely fresh, and I learned about powerlevel10k (more performant than powerlevel9k) and sdkman (an installer for multiple and different SDK flavours), and had a nice evening to boot. If I hadn't started over I would just have used the old config and not enjoyed the new benefits.
I use vagrant + ansible to configure my development environment. In the Vagrantfile I specify also the mount of the workspace containing my project. I then edit the code using vscode installed on the host (or vim from inside the box).
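A minimal Vagrantfile along those lines (the box name, paths and playbook are placeholders):

```ruby
# Illustrative Vagrantfile: mount the workspace and provision with Ansible.
Vagrant.configure("2") do |config|
  config.vm.box = "debian/bullseye64"

  # Workspace mounted into the box, so host-side vscode and in-box vim
  # see the same files.
  config.vm.synced_folder "~/workspace/myproject", "/home/vagrant/myproject"

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provisioning/dev.yml"
  end
end
```

The ansible provisioner runs the playbook from the host against the box, so the guest itself doesn't need Ansible installed.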
amarant|4 years ago
More info: https://nixos.org/
adriancr|4 years ago
Just because you aren't familiar with his setup doesn't mean it isn't at feature parity, or beyond.
Hell, he might even be running intellij in docker if he wishes.
Saying this as I have a similar setup with emacs.
turbinerneiter|4 years ago
One thing that bugs me is that I can't (or don't know how to) get my current state into a text file from which I can reproduce it.
It's also not fun for embedded development. Guess what, I need to access USB devices, serial, mass storage, HID - super annoying with this setup.
zubairq|4 years ago
https://yazz.com/visifile/download.html
linkdd|4 years ago
Unless you have a Linux-based operating system, Docker behaves very poorly.
NB: The Hyper-V backend behaves a bit less poorly than the WSL backend.
I've found that Docker Desktop causes a lot of disk I/O whenever you use volumes, pull an image, or do anything else that touches the hard drive.
skrueger|4 years ago
What are the benefits? Are there downsides to operating in the docker container for everything?
3v1n0|4 years ago
I can do the same but with easier access to the host, and therefore to hardware devices. Moving my config around is as easy as keeping dotfiles around.
eeZah7Ux|4 years ago
If you really need a "container", debootstrap + systemd-nspawn does the job and provides much better sandboxing with 10x less complexity.
You don't need Docker or Nix.