item 33444808

buscoquadnary | 3 years ago

Isn't that what the promise of Docker was: you distribute the Docker image, and everyone runs and builds the same thing?

quickthrower2 | 3 years ago

Yeah, but what is the debugging experience like? Does that work? I think it adds a layer of complication. Images can also take a while to build, although most layers would hopefully stay cached, with only the code-copy step changing. Maybe a snappy startup can fix that?
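That caching hope can be made concrete in the Dockerfile itself: order the layers so dependency installation happens before the source copy, and a code-only change invalidates just the final layers. A generic sketch (Node picked as an example stack, file names assumed):

```dockerfile
# Cache-friendly layer ordering: the same idea applies to any stack.
FROM node:20
WORKDIR /app

# Changes rarely -> the expensive install layer is reused on most rebuilds
COPY package.json package-lock.json ./
RUN npm ci

# Changes often -> only this layer and later ones are rebuilt
COPY . .
CMD ["node", "server.js"]
```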

KronisLV | 3 years ago

> Yeah but what is the debugging experience like? Does that work?

Sort of. In certain stacks, you essentially set up remote debugging the same way you would for an app running in a remote environment (except that the environment is your local container with an exposed port), and your IDE just works.
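For a JVM app, for instance, that recipe usually means enabling the JDWP agent and publishing its port, then pointing the IDE's remote-debug configuration at localhost. A sketch, with `my-app-image` as a placeholder for your own image:

```shell
# Run the containerized JVM app with the JDWP debug agent listening on 5005,
# and publish that port so an IDE on the host can attach to localhost:5005.
docker run --rm \
  -p 5005:5005 \
  -e JAVA_TOOL_OPTIONS='-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005' \
  my-app-image
```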

It's relatively carefree when it works as expected, but a bit of a pain to set up sometimes. Admittedly, something like CPU flame graphs or tools like VisualVM that let you easily select from locally running Java processes to instrument might be harder to work with.

But even then, you can run into issues with file-system permissions and any bind mounts you might need (e.g. source files in a PHP container, where you want to keep developing and testing the app across page reloads without rebuilding the entire container).
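The usual workaround for that PHP-style workflow is a bind mount plus matching the host user's UID/GID, so files written by the container aren't root-owned on the host. A Compose sketch (paths and IDs are assumptions, adjust to your setup):

```yaml
services:
  app:
    image: php:8-apache
    user: "1000:1000"          # match your host UID/GID, e.g. $(id -u):$(id -g)
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html    # live-edited code, no image rebuild needed
```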

I wrote a bit more about it here: https://blog.kronis.dev/everything%20is%20broken/containers-...

I'm still a proponent of using containers: they make applications more consistently managed (configuration, resource limits, port bindings, storage), more self-contained (dependencies, running different or multiple versions in parallel), and easier to launch (e.g. a Docker Compose file, or a fancier variation of YAML, instead of Ansible + systemd services). But they can definitely be a leaky abstraction if your development machine's OS isn't *nix.
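The "consistently managed" part fits in one Compose file: version, configuration, limits, and ports all pinned in one place. A minimal sketch (image name and values hypothetical):

```yaml
services:
  api:
    image: my-api:1.4.2        # exact version pinned
    environment:
      - DB_HOST=db             # configuration
    mem_limit: 512m            # resource limits
    cpus: "1.0"
    ports:
      - "3000:3000"            # explicit port binding
```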

drunkenmagician | 3 years ago

I would have thought so too ... ¯\_(ツ)_/¯