A fun idea might be to combine something like this with Tailscale and their Mullvad add-on, so you get ephemeral browsing environments with VPN connectivity. That could make it easy to test from various countries simultaneously on a single host.
Worth mentioning that Jess Frazelle was running desktop applications in Docker a while ago. Not a full desktop, but also quicker to rebuild individual apps.
I've been running stuff in LXC for ages (and before that, custom chroots). A while ago I made the switch to Wayland - and now started moving things over to podman, which has the added benefit of being able to share the stuff easily:
I use two different setups: on some systems I only run things like browsers in containers, on others I also run the desktop itself in a container. My helper scripts aren't published yet; they'll need some more cleaning up.
On Windows, doesn't this technically mean OP is running Linux inside a Linux VM inside Windows? From what I understand, Docker is Linux tech, and to use it anywhere else a (small) Linux VM is required. If true, I would just dispense with the extra layer and run a Linux VM directly. Not to discourage experimentation, though!
For one thing, Docker is not really "Linux inside Linux". It uses Linux kernel features to isolate the processes inside a container from those outside. But there is only one Linux kernel which is shared by both the container and its host (within the Linux VM, in this case).
For another, running Linux containers in a Linux VM on Windows is one (common) way that Docker can work. But it also supports running Windows containers on Windows, and in that case, the Windows kernel is shared just like in the Linux case. So Docker is not exactly "Linux tech".
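A quick way to see the shared-kernel point for yourself (assuming Docker is available; the guard just makes the snippet a no-op elsewhere):

```shell
# Kernel release on the host (or, under Docker Desktop, in the helper VM):
uname -r

# Kernel release as seen from inside a container; any small image works.
if command -v docker >/dev/null 2>&1; then
  docker run --rm alpine uname -r
fi
# Both commands print the same release string, because a container is just
# isolated processes on the same kernel, not a separate operating system.
```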
I desperately wish I could run docker properly (CLI) on the Mac rather than use docker desktop, and while we are making a dream list, can I just run Ubuntu on the Mac mini?
I develop my apps in the most native way possible: deb packages, apt repo, systemd, journald, etc. However, I would also like to be able to run them in Docker or a VM. Is there a good systemd-in-Docker solution for this, so I basically don't run anything differently and don't have to maintain two sets of systems?
Have you looked at systemd-nspawn[0]? It's not Docker, so it wouldn't be useful for writing Dockerfiles, but it gives you light containers that work beautifully with systemd.
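A minimal flow looks something like this (a sketch: the Debian suite and machine path are arbitrary choices, and it assumes debootstrap is installed):

```shell
# Skip quietly on machines without the tooling.
command -v debootstrap >/dev/null 2>&1 || exit 0
command -v systemd-nspawn >/dev/null 2>&1 || exit 0

# Build a minimal Debian root filesystem...
sudo debootstrap stable /var/lib/machines/deb-test

# ...then boot it: -b runs the container's own systemd as PID 1, so
# systemctl and journalctl work inside exactly as on a real machine.
sudo systemd-nspawn -D /var/lib/machines/deb-test -b
```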
Containers with systemd as the init process are considered first-class citizens by the Podman ecosystem (the base images are named accordingly: e.g., ubi10-init vs ubi10).
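A sketch of what that looks like in practice (the ubi9 image name is one example of the `-init` naming convention; substitute whatever init-enabled base you use):

```shell
command -v podman >/dev/null 2>&1 || exit 0  # no-op without podman

# The *-init images run systemd as PID 1; podman wires up /run, cgroups,
# and friends so units start normally inside the container.
podman run -d --name initdemo registry.access.redhat.com/ubi9/ubi-init
podman exec initdemo systemctl is-system-running --wait
podman rm -f initdemo
```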
You could use Nix to build the package and provide a nixos module and a docker image from the same derivation. Now you only have to manage three systems instead of two. /s
WSL doesn't have an X server; it has a Wayland compositor. That said, yes, you can use that. You can even run a different compositor nested, so you get one single window with a desktop if you want.
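For example (a sketch assuming the weston package is installed inside the WSL distro; the size is arbitrary):

```shell
command -v weston >/dev/null 2>&1 || exit 0  # needs a nested-capable compositor

# Weston runs as a client of the WSLg compositor, so the whole nested
# desktop appears as one ordinary window on the Windows side.
weston --width=1600 --height=900 || true  # exits immediately with no compositor to nest under
```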
Samsung DeX had a Linux desktop package in 2018: an LXD container based on Ubuntu 16.04, developed in collaboration with Canonical. Unfortunately they deprecated it shortly after, possibly already in 2018, and the next Android update removed it.
It worked but Android killed it mercilessly if it used too much memory or the rest of the system needed it.
Some current Android devices that have USB-C 3.1+ and support DisplayPort Alt Mode (USB-C to HDMI) will detect when an external display is connected and provide a full extended desktop. [0]
You can connect mouse, keyboard, and display to the Android device through an unpowered USB-C hub that offers the respective ports. Battery life depends on the make/model of Android device.
I have a Motorola phone and the experience is very nice.
I did a similar thing some years ago when trying to hack together my own cloud gaming setup using AWS GPU Linux instances. While it worked, the price per hour wasn't worth it compared to just buying a good GPU.
My idea was very similar: using TigerVNC and just launching Steam without a WM. Unfortunately I think I lost the code for it.
> The usability was surprisingly decent. LibreOffice and GIMP worked fine, although there was a bit of a lag. I would estimate about 70% of native performance, but still very usable.
You can get better performance and lower latency for your remote desktop and eliminate lag by using a gaming-focused remote desktop server, e.g. using Sunshine: https://github.com/LizardByte/Sunshine .
This is a much less efficient way of running Linux GUI apps over WSL since it will use software rendering.
WSL2 provides a GPU accelerated Wayland server. If your Mesa build (ver > 22) has d3d12 drivers you can use Windows DirectX as your OpenGL driver. Combined with the WSLg Wayland server you get near native desktop performance out of GUI apps.
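You can verify which driver is actually in use from inside the distro (assumes mesa-utils is installed for glxinfo; the expected string is what the d3d12 Gallium driver reports):

```shell
command -v glxinfo >/dev/null 2>&1 || exit 0  # apt install mesa-utils

# With the d3d12 driver active, the renderer string names D3D12 plus the
# underlying Windows GPU, e.g. "D3D12 (NVIDIA GeForce ...)".
glxinfo -B | grep -i "opengl renderer" || true
```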
This article is bad in a lot of ways, but all of a sudden it jumps to running Linux in a browser:
“After my failed custom image attempt, I found a promising XFCE-based Debian image on Docker Hub. I downloaded it in minutes and, with a few commands, launched it. When I opened the URL, I was greeted by a fully functional Linux desktop, running right in my browser. The pure geek joy of seeing a complete OS served from inside a Docker container was a feeling I won’t forget. It worked”
And on Windows, docker for Linux containers is just a VM, so this part isn’t really true:
“Containers are not isolated Operating Systems: Docker containers share the host kernel. This is what makes them lightweight and great for single services. Whereas desktop environments expect system services like (systemd, logind, udev, DBus) and device access to be available. Containers don’t provide that by default
You know, it's funny—I always hear people say they want to keep their Windows-only applications and run Linux alongside it, but I made the switch almost a decade ago and honestly can't say I'm worse off for it. And frankly, there's never been a better time to make that leap; the Linux desktop has finally hit its stride and become genuinely mature, with the polish and features one would expect from a modern operating system.
Apart from a handful of games, I haven't actually needed Windows for anything. So I'm curious—what Windows-only software is keeping you on it, OP?
Not OP; outside of games I keep a dual boot pretty much exclusively for Visual Studio. IMO it has one of the best debuggers I've ever used. I know gdb is just as powerful, if not more, but it is so much less ergonomic, even with the different frontends I've tried. Edit and continue for C/C++ is such a killer feature too. I stay away from msbuild or using Visual Studio as an editor; it's purely my debugger.
Granted, it helps that a lot of the time my need for "advanced" debugging relates to MSVC/Windows-specific issues, which, while I could run things in Wine, are just easier to handle if I'm on Windows anyway.
I'm not OP, but for me it's that I end up having trouble with games, and maintaining a dual boot for them isn't worth it. Most recently I was trying to install gamescope on PopOS LTS for retro gaming, but it was too old a distribution for gamescope's dependencies, so I upgraded to Cosmic and it broke my software KVM. I use PopOS because it has great NVIDIA support, and I've run into issues before with other distros.
At that point I switched back to windows but I’ll try again after a few months. I always keep trying.
I think if I didn’t play games I’d be fine with Linux. I hate Windows except that everything just works.
I absolutely love the direction KDE Plasma Wayland session is headed; I think it looks great, it definitely runs great, and it really is just packed with features. I do have some personal KDE gripes I'd like to work on, mainly just improving the KIO fuse integration more, but wow have things progressed fast.
Still, I caution people to not just jump to Linux. The actual very first problem is not software. It's hardware.

Firstly, running cutting-edge motherboards and GPUs requires a more bleeding-edge setup than typical LTS distros give you; you'll be missing audio codec drivers and the GPU drivers will be missing important updates, if things even boot.

Secondly, NVIDIA GPUs are currently in a weird place, leaving users with trade-offs no matter what choices they make, making it hard to recommend Linux to the vast majority of users with NVIDIA GPUs.

Thirdly, and this one is extremely important: nobody, Nobody, should EVER recommend people run Linux on random Windows laptops. This is cruel and unusual punishment, and it's a bad idea. It's a bad idea even if the Arch wiki says it works pretty well. It's a bad idea even if a similar SKU works well, or hell, even if a similar SKU ships with Linux out of the box. It's just a bad idea. The only two big vendors that even really do a good job here are System76 and Framework, and they still have to use a bunch of components from vendors that DGAF about desktop Linux. It is impressive that you can more or less run whatever desktop hardware and things usually work OK, but this logic doesn't apply to laptops. This point can't be stressed enough. I have extensive experience with people trying to switch from Windows to Linux, and it's genuinely a challenge to explain to people how this doesn't work; they don't have the frame of reference to understand just how bad of an idea it is, and learning the hard way will make them hate Linux for no reason.
Still, even with good hardware, there's plenty of software woes. You'll be missing VSTs. You might have to switch to Davinci Resolve to edit video, Krita to do digital painting, and Blender to do... Well, a lot of stuff. All good software, but a very non-trivial switch.
I'm really glad to see a lot more people interested in running Linux and I hope they have a good experience, but it's worse if they go in with inflated expectations of what they can do while still having a good experience. Being misleading about how well Linux or WINE will work for a given task has never really helped the cause, and has probably hurt it a lot.
I won't argue about Proton/Steam, though, that shit's miraculous. But honestly, a lot of people like playing competitive multiplayer games, and those anti-cheat vendors don't give a damn about your Linux, they're thrilled to start integrating Secure Boot with TPM attestation as it lets them try to milk more time out of the "maybe we can just secure the client" mindset. (I personally think it's going to die soon, in a world where ML has advanced to the point where we can probably do realtime aimbots that require absolutely no tampering with the client or the computer it runs on, but we'll see I guess.) But for me who doesn't care, yep, it's pretty good stuff. Whenever there's a hot new thing out chances are it already works on Proton; been playing a lot of Peak which was somewhat popular in the last couple months.
As I understand it, "distrobox" is built to support these setups and experimentation even more readily, including defaults to support:
> The created container will be tightly integrated with the host, allowing sharing of the HOME directory of the user, external storage, external USB devices and graphical apps (X11/Wayland), and audio.
> Why
* Provide a mutable environment on an immutable OS, like ChromeOS, Fedora Silverblue, OpenSUSE Aeon/Kalpa, or SteamOS3 ...
* Provide a locally privileged environment for sudoless setups (eg. company-provided laptops, security reasons, etc…)
* To mix and match a stable base system (eg. Debian Stable, Ubuntu LTS, RedHat) with a bleeding-edge environment for development or gaming (eg. Arch, OpenSUSE Tumbleweed, or Fedora with the latest Mesa)
* Leverage a high abundance of curated distro images for docker/podman to manage multiple environments.
> Aims
> This project aims to bring any distro userland to any other distro supporting podman, docker, or lilipod. It has been written in POSIX shell to be as portable as possible, and it does not have problems with dependencies or glibc version compatibility.
> It also aims to enter the container as fast as possible, every millisecond adds up if you use the container as your default environment for your terminal:
> Security implications
> Isolation and sandboxing are not the main aims of the project; on the contrary, it aims to tightly integrate the container with the host. The container will have complete access to your home, pen drives, and so on, so do not expect it to be highly sandboxed like a plain docker/podman container or a Flatpak.
distrobox create -n test
> Create a new distrobox with Systemd (acts similar to an LXC):
distrobox create --name test --init --image debian:latest --additional-packages "systemd libpam-systemd pipewire-audio-client-libraries"
distrobox enter test
Unfortunately it looks like sandbox mode [0] is not a goal, so it doesn't solve the main problem I have - running semi-trusted apps (e.g. Android Studio) and minimising their impact. Currently I just share X11 socket and run it in Docker.
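For what it's worth, the socket-sharing setup can be sketched like this; plain X11 gives every client full access to the display (input sniffing, screen capture), so nesting a throwaway server such as Xephyr is one way to get an actual boundary. The image name below is a placeholder:

```shell
command -v Xephyr >/dev/null 2>&1 || exit 0  # xserver-xephyr on Debian/Ubuntu
command -v docker >/dev/null 2>&1 || exit 0

# A nested X server on its own display; the app container only ever sees
# this display, not the real session's clients.
Xephyr :2 -screen 1280x800 &
sleep 1

# Share only that display's socket with the container.
docker run --rm -e DISPLAY=:2 \
  -v /tmp/.X11-unix/X2:/tmp/.X11-unix/X2 \
  my-android-studio-image || true  # placeholder image; replace with your own
```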
> Containers are not isolated Operating Systems: Docker containers share the host kernel. This is what makes them lightweight and great for single services. Whereas desktop environments expect system services like (systemd, logind, udev, DBus) and device access to be available. Containers don’t provide that by default.
I thought Docker images always run in a VM on non-Linux systems, no? This guy is running Windows on the host, right? Confusing.
These images are top-notch, well documented, and have recently been refactored to use Selkies under the hood. Even with gamepad support, I’ve used these for running DOSbox, RetroArch, streaming video, and many other things.
There’s even a mature extensibility layer for using their images as a base layer to add services and apps.
Can’t speak highly enough of the linuxserver.io folks.
I actually wanted to try webtop for a long time, and did it only recently. I could not figure out these selkies for the life of me. It wanted a bunch of ports, was complaining about something all the time (don't remember, it's been a month). Might be a skill issue, but I've been using docker for the past 10 years.
Moreover, they want root access to the host system, which kind of defeats one of the reasons to use containers. Is there a video that explains the benefits in a good way? I mean, if it's for gaming only, I can understand the use case, but say you just want to run something like GNU Radio in a container, why would I need 60 fps and root permissions for that?
I run full-headed Puppeteer sessions in Docker, with VNC for debugging and observation. I keep the instances as light as I can, but I suspect I'm most of the way there toward a "full" desktop experience. Probably just need to add a more full-featured window manager (currently I use fluxbox).
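For comparison, the usual skeleton for that kind of image is an entrypoint along these lines (a sketch: display number, geometry, and the script path are all arbitrary placeholders):

```shell
command -v Xvfb >/dev/null 2>&1 || exit 0  # only meaningful inside the image

# Virtual framebuffer display for the headful browser.
Xvfb :99 -screen 0 1920x1080x24 &
export DISPLAY=:99

# A lightweight WM so pages that query focus/window state behave sanely.
fluxbox &

# Expose the display over VNC for debugging and observation.
x11vnc -display :99 -forever -nopw &

# Placeholder for whatever starts the Puppeteer session.
if [ -f /app/run-session.js ]; then
  node /app/run-session.js
fi
```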
> This entire experiment took place on a Windows 10 PC, driven by a specific question: what if you could have the best of both worlds? The idea of a full Linux environment running inside a Docker container, side-by-side with my standard Windows applications, was too intriguing to pass up. No reboots, no separate partitions—just a seamless, containerized Linux desktop.
> No reboots, no separate partitions—just a seamless, containerized Linux desktop.
> Windows 10 PC .... No reboots
You're kidding right? Linux is no reboots, Windows... Well let's just say yet another reason I no longer daily Windows.
These days I have a Docker container with Remmina that I use as a bastion (fronted by Cloudflare and Authelia for OIDC), but everything else is LXC with xrdp and hardware acceleration for both application rendering and desktop streaming (xorgxrdp-glamor is much, MUCH better than VNC).
I am, however, struggling to find a good way to stream Wayland desktops over RDP.
https://blog.jessfraz.com/post/docker-containers-on-the-desk... https://github.com/jessfraz/dockerfiles
https://github.com/aard-fi/tumbleweed-images/tree/master/way...
[0] https://wiki.archlinux.org/title/Systemd-nspawn
Build system packages and containers from those packages for a given target distro.
Behind the scenes it uses BuildKit, so there's no extra stuff you need; just Docker (or any BuildKit daemon).
Seems very inefficient to have to render everything through the browser
https://akik.kapsi.fi/rocky/
The desktop is accessed locally, not via a network connection, and it's running under Xwayland.
[0] https://uperfect.com/blogs/wikimonitor/list-of-smartphones-w...
Kinda hope they revisit this idea again in the near future.
My clients are an RPi 4 and an older iPad; sometimes I use an Android phone as well. Works really well.
You will need to give Sunshine access to a GPU, though.
WINDOWS_IP=$(ip route | awk '/^default/ {print $3}')
export DISPLAY="$WINDOWS_IP:0"
Now I can use the mighty mobaxterm from https://www.mobatek.net to just run whatever and pipe it back to Windows.
One caveat is that the $PATH gets polluted with space characters by 'Doze, so I have to do something like this for QGIS:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin qgis -n &
What are your use cases? To run Linux GUI apps?
Does mobaxterm allow you to view those GUI apps?
But for everything else I'm on Linux as well.
https://distrobox.it/
I learned about it from the KDE wiki; thank you jriddell for leaving that nugget: https://community.kde.org/Neon/Containers
[0] https://github.com/89luca89/distrobox/issues/28
https://docs.linuxserver.io/images/docker-webtop/
This allows me to sandbox my whole environment so I can run it exactly the same no matter the machine, including all packages and editor plugins.
If anyone wants to check it out, it's at https://github.com/hauxir/dotfiles
The devenv.sh file describes my entire environment.