I'm not sure this belongs on the front page of HN, but the problem seems to stem from the `virtio-blk` driver they're using not supporting TRIM, which means space is never deallocated once the VM has written to a block.
Switching to `virtio-scsi` and sending regular TRIMs could fix this issue. It looks like they're also allowing people to configure the maximum size of the qcow2 image, which puts a hard upper bound on how much space the VM will take.
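For reference, here's a sketch of what that switch looks like with plain qemu flags (an illustration of the idea only, not Docker's actual VM plumbing): attach the qcow2 via virtio-scsi with `discard=unmap` so guest TRIM/UNMAP commands punch holes in the host-side image, and discard unused blocks from inside the guest.

```shell
# Create a qcow2 image with a hard upper bound on its virtual size:
qemu-img create -f qcow2 disk.qcow2 64G

# Attach it via virtio-scsi with discard support, so guest TRIM/UNMAP
# commands punch holes in the host-side image file:
qemu-system-x86_64 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive if=none,id=hd0,file=disk.qcow2,format=qcow2,discard=unmap \
  -device scsi-hd,drive=hd0,bus=scsi0.0

# Then, inside the guest, discard unused filesystem blocks periodically:
fstrim -a -v
```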
(I'm a Docker employee working on this very issue. I guess my plan to take a break by reading HN failed!)
You're completely right -- it's a problem caused by lack of TRIM in the storage path. In the next beta of Docker for Windows (beta 31 due today hopefully) TRIM should be enabled. The Mac will take a little longer as we need to switch protocols and do more work on the host side -- unfortunately the default Apple filesystem doesn't support sparse files so we can't "cheat" by simply passing the TRIM down to the filesystem layer. We'll probably need some kind of explicit block-level compaction to shuffle blocks from the end of the file into holes that have been created by TRIM.
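To make the compaction idea concrete, here's a toy Python model (my own sketch, not Docker's code): treat the image as a list of blocks where TRIM has left holes, move live blocks from the tail into the earliest holes, then truncate the trailing free space. A real implementation would also have to update the qcow2 mapping tables so virtual offsets still point at the right physical blocks.

```python
# Toy model of block-level compaction (hypothetical, simplified):
# `None` marks a hole left by TRIM; bytes objects are live data.

def compact(blocks):
    blocks = list(blocks)
    holes = [i for i, b in enumerate(blocks) if b is None]
    # Walk live blocks from the end of the file toward the front.
    for src in range(len(blocks) - 1, -1, -1):
        if not holes or holes[0] >= src:
            break               # no hole earlier than this block
        if blocks[src] is None:
            continue            # already free
        dst = holes.pop(0)
        blocks[dst] = blocks[src]   # copy the block into the hole
        blocks[src] = None          # old location becomes free
    # Truncate the run of trailing holes to shrink the file.
    while blocks and blocks[-1] is None:
        blocks.pop()
    return blocks

disk = [b"a", None, b"b", None, b"c", b"d"]
print(compact(disk))  # [b'a', b'd', b'b', b'c']
```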
Everything old is new again. I posted a Docker issue about the default storage driver (DeviceMapper) not correctly freeing unused space nearly three years ago: https://github.com/docker/docker/issues/3182
From what I can tell, that issue was never truly resolved. The advice is like the adage about finding romance, "Have enough storage, don't run out of storage." As long as you follow that advice, you will never have a problem.
Oh, you installed Docker on a cloud provider with limited local disk, or on your laptop? Silly you for thinking that a finite amount of storage and the default configuration of Docker was adequate.
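For anyone hitting this, the usual manual reclamation of that era looks like the following (standard Docker CLI commands; run with care, since they delete stopped containers, untagged images, and orphaned volumes):

```shell
# Remove stopped containers:
docker rm $(docker ps -aq -f status=exited)

# Remove dangling (untagged) image layers:
docker rmi $(docker images -q -f dangling=true)

# Remove orphaned volumes:
docker volume rm $(docker volume ls -qf dangling=true)
```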
Same here. I've also bumped into various storage issues with Docker, and they really drove me mad a couple of times. And like the parent stated, the Docker docs about these issues remind me of the old XFS FAQ, which said of data loss on unclean shutdown (power loss, panic): "If you know it hurts, don't do it." In the end Docker is just not as robust as, for example, SmartOS / Triton with ZFS.
I've given up with Docker on mac. It's a complete joke that they even consider it out of beta. The windows version is faster but doesn't handle file change events yet (or something like that). I just said screw it and switched my dev machine over to Ubuntu.
I haven't given up on Docker for Mac yet, but I could not agree more about them taking it out of beta. It is not ready for primetime yet. The shared volume driver, osxfs, is complete garbage in terms of performance. I use this as what I hope is a temporary workaround: https://github.com/EugenMayer/docker-sync.
Agreed. Forget usage leaks, Docker for Mac doesn't even work on my system, and five days after reporting it, nobody has responded. It used to work, many versions ago, but they've somehow broken it entirely between redesigning the UI 2-3 times and switching from VirtualBox (which worked) to OSX virtualization (which so far doesn't). See https://github.com/docker/for-mac/issues/984
I have contributed to Docker since very early days but am frankly not using it now because the project has for all intents and purposes entered a toxic-to-the-community stage where hype and marketing exceed capacity to resolve issues, total fails are occurring for me on all platforms, and AFAIK nobody in their over-funded San Franciscan office bubble seems to care. It's a typically arrogant startup: not listening to users, heading for a fall.
It's like they've decided to re-invent downloads (badly), re-invent cross-platform hypervisors (badly), re-invent orchestration (badly), re-invent storage (badly) and roll it all up in branded glue. I can't help but wonder if an approach with broader applicability and longevity would segregate the OS (anything container or VM-like at any layer, from Erlang to BSD jails to diskless clusters) and the environment paradigm (container-based, paravirtualization-based, bare metal) from the workload, and truly enable infrastructure agnosticism by removing the dependency on a single shifting-sands component, allowing people to A/B test identical workloads on disparate paradigm infrastructures. I tried this a few years ago, and it worked: http://stani.sh/walter/cims
No kidding, same experience here with Docker for Mac. Days wasted chasing down problems with a completely unusable tool. Literally full-stop critical bugs that are clear as day, yet they release it and call it 1.0? Docker is ruining their reputation with this sort of garbage.
I suppose this is better than when the disk usage wouldn't grow at all.
It's not all that terrible to manage this yourself under light-to-medium usage, but if you're constantly experimenting with new machines and run into this more than once a week I'd say it's time to use docker remotely.
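One way to use Docker remotely without changing your workflow much is to point the local client at a remote daemon; docker-machine can set up the TLS plumbing (the driver, machine name, and IP address below are placeholders for your own setup):

```shell
# Provision/register a remote engine over SSH with the generic driver:
docker-machine create --driver generic \
  --generic-ip-address 203.0.113.7 remote

# Export DOCKER_HOST and TLS settings into the current shell:
eval $(docker-machine env remote)

# From here on, the local `docker` client talks to the remote daemon:
docker info
```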
In addition, they don't seem to care that the whole thing is useless in China - https://github.com/docker/docker/issues/28791 - also after emailing, no response.