Having standardized components like these is pretty valuable and reusable.
One could definitely write them on their own.
It’s also another way to discover other packages in these categories.
Some of the LXC container settings can be a little specific, and these scripts handle those well most of the time with a standard install; an advanced install option is available too.
I think the balance is closer to 80% shiny, 20% working code.
There's definitely some cool stuff here (disabling the nag screen, thank you!) but I really dislike the way it's presented.
I don't like the curl|bash very much, but mostly I think this would be far more interesting as a collection of one-liners to accomplish each task vs the current form of a curl|bash with dialogs and prompts. I think that's better for security and better for learning: you should know what commands you're executing!
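The inspect-before-running workflow the parent describes is cheap to adopt. A minimal sketch — the script body and file name here are stand-ins, not one of the actual helper scripts:

```shell
# Fetch to a file instead of piping to bash; in practice the first step
# would be something like: curl -fsSL "$SCRIPT_URL" -o install.sh
cat > install.sh <<'EOF'
#!/bin/sh
echo "pretend install step"
EOF

cat install.sh   # actually read what you're about to execute
sh install.sh    # run it only once you're satisfied
```

Same number of commands as the one-liner, but you get to see the script before it touches your host.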
I really wish proxmox had sort of first-class docker/podman integration.
With Docker, a Dockerfile is a self-contained recipe: you build it, then you run the container.
With Proxmox, you sort of have to be a sysadmin. The Proxmox UI helps you define the characteristics of the container, but then you have to put everything inside the container yourself, without help or reproducibility.
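For what it's worth, the container-creation half can at least be scripted with Proxmox's `pct` tool, even if the "fill the inside" half stays manual. A hypothetical sketch — the container ID, template file name, and packages below are made up:

```shell
# Create and start an LXC container from a locally downloaded template.
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname demo --memory 512 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200

# The part a Dockerfile would have captured -- provisioning the inside:
pct exec 200 -- apt-get update
pct exec 200 -- apt-get install -y nginx
```

Pointing a configuration tool like Ansible at `pct exec` (or at SSH into the container) is the usual way people claw back some reproducibility here.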
Proxmox is awesome. I use it for my router and a bunch of servers. [1] It is so nice to be able to take a snapshot, update OPNsense, and go back if something fails.
It's working so well I now have too many vms and need to get a bigger disk.
It's a product instead of a box of parts. My brain only has so much cognitive power. If I'm spinning up a VM, I'm trying to accomplish some other work.
For me, using it as a NAS/VM platform, it's basically Debian with some virtualization-relevant packages and an opt-in kernel that are more reasonably up to date, plus solid ZFS support out of the box, so I don't have to deal with the "update kernel, reboot, oops, version mismatch, I need to revert" nonsense.
While things like TrueNAS exist with ZFS support, iXsystems has in the past made some overeager patches to ZFS, bringing in new features that weren't appropriate to include yet, IMO. I hate to dig up old news, but that's why I've avoided them so far.
Basically I've got Proxmox "figured out", which lets me fuck around with the things I want, rather than the things I don’t want. This reduces my very subjective mental overhead.
All the high availability stuff is wasted on me though, and I just disable the related services.
Note to new users experimenting with Proxmox:
The most common "gotcha" with Proxmox is that it does NOT automatically adjust network settings after install.
If you swap out a network card or change which physical port it's connected to, you will lose network access and will need to manually edit `/etc/network/interfaces` to make sure the port ID you're using and the assigned IP are still valid:
iface vmbr0 inet static
...
bridge-ports <The port ID, like "enp130s0f0">
...
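When the port ID has changed, the new name is easy to find (this works on any modern Linux with iproute2):

```shell
# List interfaces in brief form; pick the NIC whose link comes UP after
# plugging in the cable, then put that name into bridge-ports above.
ip -br link
# Apply the edited config without rebooting (Proxmox ships ifupdown2):
#   ifreload -a
```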
With Proxmox, I click the plus sign, pick an ISO image and CPU/memory, and I'm good to go, with a remote console viewer and tons of other neat management stuff.
In the same amount of time, if I were diving into DIYing it, I'd probably be about 5 minutes into reading the Arch Linux wiki page for KVM/QEMU.
Me neither, although I am using it for my home. It does allow configuring VMs, containers, logical volumes, and such. I think you can have multiple Proxmox hosts in a cluster. I might not have reached the point where I am doing enough with it to understand the benefit.
I was in a similar boat and finally got around to trying out Proxmox in the last year or so. I just wanted to share my own experience here, since it's a bit different from the "standard" uses I've seen.
I had been running a k3s cluster (a k8s flavor) on some Raspberry Pis, but decided I needed some non-ARM nodes, and "invested" in a few low-power "1L" AMD64 PCs (6-8 cores + hyper-threading). I was initially going to just install Ubuntu and base my setup on my existing Ansible automation to make things less inefficient. But I figured I'd play around with Proxmox first and see if there was any benefit to using it as a base layer, since I'd heard a lot about it.
I'm so glad I did. I ended up learning quite a bit in the process. Some quick highlights about using Proxmox for VMs in general:
* Proxmox supports creating a "cluster", so you can log in through one machine to administer them all. You can conveniently "move" VMs between machines pretty seamlessly.
* If you install the para-virtualization drivers for e.g. Windows or Ubuntu VMs, you can do pretty fast remote KVM. E.g. I could run YouTube on a Windows VM in my basement over SPICE and it almost looks like it's running locally. (Not that it's a use case I care about; it mostly just shows how performant it is.)
In terms of actually getting around to deploying k3s on top of the infra:
* I ended up learning HashiCorp Packer and Terraform, which integrated nicely with my existing Ansible experience.
* Packer turns an Ubuntu ISO + my "base" Ansible setup playbooks into a pre-baked machine template directly in Proxmox. (My local machine's Packer binary just orchestrates the process.)
* Terraform deploys the machine template into the Proxmox cluster. Basically a config file of machine names + IPs + MAC addresses, plus a few other params and initial setup.
* Ansible then installs any final dependencies (anything not in the base template), sets up the first k3s master, grabs the join token, and adds 2 more master nodes for a proper `etcd` backend.
* Ansible then installs my base Kubernetes services (cert-manager, Rancher, Longhorn storage, etc.) by running helm commands on one of the nodes.
* This is where I'm at now; the next step is to deploy my existing Flux CD-automated "GitOps" apps (built for ARM64+AMD64 via a GitLab runner, also in Proxmox). These _had_ been running on my now-quite-crusty-seeming Pi cluster.
I can run a single command to delete all the VMs, and rebuild + setup everything (full HA cluster + apps deployed and running) from scratch in ~6 minutes without any manual input required from me, just a few secrets/params in a config file.
This has made exploring the horizon of possibilities _so much easier_ without getting locked in; I can try out weird Longhorn storage configs or k8s monitoring stacks without worrying about needing to "back out" my changes if I picked bad settings. (Just blow it up and try again!) I can change how VLANs are configured in the early steps, try adding a library to the base Ubuntu install cluster-wide, and so on, super easily.
I am primarily a software engineer, so it has been really nice to delve into the operational side of things and get a properly reproducible setup. It really has transformed how I think about the cluster: it's no longer a "thing to carefully maintain", but instead a great sandbox to explore AND deploy my own k8s applications on, without paying cloud bills.
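A full-rebuild command like the one described above is usually just a thin wrapper over the three tools. A rough sketch — file names are invented, the flags are the standard ones:

```shell
#!/bin/sh
set -e
# 1. Bake the Ubuntu + base-playbook template into Proxmox (when it changed):
packer build ubuntu-base.pkr.hcl
# 2. Tear down the old VMs and re-clone fresh ones from the template:
terraform destroy -auto-approve
terraform apply -auto-approve
# 3. Install remaining deps, bootstrap the k3s masters, then the helm services:
ansible-playbook -i inventory.ini k3s-cluster.yml
```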
My Proxmox journey in the past few months definitely turned into more of a rabbit hole than I'd expected.
Yeah, you can write weird shell scripts and re-implement hot VM migration (moving a running VM from one physical host to another without shutting it down), but what's the point, really? You might as well use Proxmox.
If you ever actually need to see how it's done, you can just look at what Proxmox is doing under the hood (it's open source, after all).
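Concretely, live migration in Proxmox is one command, and the code behind it is readable Perl shipped on every host. The VM ID and node name below are made up, and the module path may vary by version:

```shell
# Move running VM 100 to node "pve2" without downtime.
qm migrate 100 pve2 --online
# The implementation, if you're curious how it's done:
#   less /usr/share/perl5/PVE/QemuMigrate.pm
```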
It's a sandbox that lets you play without stepping in the mud or throwing rocks. See my other post in this thread for what happens when you try to do it the hard way.
If anyone is looking at Proxmox, running these scripts from the host will speed up experimenting.
It's great to have something up and running in minutes, even if only to redo it slightly differently.
If you are considering two Proxmox hosts, ensure the second one added to the cluster is an empty Proxmox host; the cluster created from the first box will assign unique IDs to all containers on any host.
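For reference, the join itself is two commands. This is also why the second host must be empty: the joining node's guest configuration is replaced by the cluster's. The IP below is a placeholder:

```shell
# On the first, existing host:
pvecm create my-cluster

# On the brand-new, EMPTY second host (its config is overwritten on join):
pvecm add 192.168.1.10    # IP of the first host

# Verify membership and quorum from either node:
pvecm status
```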
Also, for homelabbing, there's a magic incantation that lets you configure Proxmox away from the default High Availability cluster mode.
By default, HA mode insists that all nodes are up or the others won't come up. This tweak (it may be included in the OP's scripts, I don't recall) lets you retain many of the benefits of having Proxmox machines "connected" without requiring you to treat them like an HA cluster.
In my case, I have a node I shut down when it's not in use (I use it for specific occasional tasks), and then I have nodes I want up all of the time.
With the non-HA tweaks, you can still do things like centrally manage everything and migrate VMs and containers between nodes, without the limitation of it wanting them all up and available all of the time.
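I'm not certain which exact incantation the parent means, but a common version of this tweak is to stop the HA services and relax quorum expectations while a node is deliberately off:

```shell
# Stop the HA managers so nothing tries to fence or relocate guests:
systemctl disable --now pve-ha-lrm pve-ha-crm
# When a node is intentionally powered down, tell the remaining node(s)
# that the current vote count constitutes quorum (run on a surviving node):
pvecm expected 1
```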
Slightly off topic: I'm setting up a home server on a mini PC that has Windows 11 Pro pre-installed. I want to attach it to my TV and play retro games as well as run home automation tasks (periodic scripts, UniFi controller, Pi-hole, etc.).
Is anyone using Proxmox in their homelab? Would you recommend blowing away Windows, installing Proxmox, and then installing Windows with PCIe passthrough?
I actually use Proxmox on my main PC: Ryzen 5950X, 64GB RAM, RTX 4070, AMD 6500 XT. The two GPUs are each passed to a Windows and a Debian VM respectively, and each VM also gets a USB card passed through for convenience. I run half a dozen other VMs off of it, hosting various portions of the standard homelab media/automation stacks.
Anecdotally, it's a very effective setup when combined with a solid KVM switch. I like keeping my main Debian desktop and the hypervisor separate, because it keeps me from borking my whole lab with an accidental rm -rf.
It is possible to pass all of a system's GPUs to VMs, using exclusively the web interface/shell for administration, but it can cause some headaches when there are issues unrelated to the system itself. For example, if I lose access to the hypervisor over the network, getting the system back online can be a bit of a PITA, as you can no longer just plug it into a screen to update any static network configuration. My current solution is enabling DHCP on Proxmox and managing the IP with static mappings at the router level.
There are a few other caveats to passing through all of the GPUs that I could detail further, but for a low-impact setup (like running emulators on a TV) it should work fairly well. I have also found that Proxmox plays well with mini PCs. Besides the desktop, I run it on an Intel NUC as well as a Topton mini PC with a bunch of high-speed NICs as a router. I cluster them without enabling the high-availability features in order to unify the control plane for the three systems into one interface. It all comes together into a pretty slick system.
I did this for a while, running multiple VMs, some of which had PCIe passthrough for GPUs on both Windows and Linux. Luckily my motherboard separated out its IOMMU groupings, which made this work for me. While you _could_ do this, you may run into issues if your IOMMU groups aren't separated enough. The biggest issue I had was drivers causing problems on Windows. I eventually blew the entire instance away and just run Windows.
I'd recommend a separate device if you need any access to a GPU. But I do recommend Proxmox for a homelab. I still have it running on a separate 2012 Mac Mini.
My use case is slightly different, but I use Proxmox for my home server and would recommend it, especially if you're familiar with Linux systems or want to learn about them, which I've done through the years I've been using this setup.
My server was originally a single Debian installation set up to host local services for things like git. That grew into hosting a site, a VPN, then some multiplayer game servers. When I reached the point where too many things were installed on a single machine, I looked at VM options. I've used VMware vSphere professionally, but settled on Proxmox for these main reasons: easy to set up and update, easy to build/copy VMs, a simple way to split physical resources, monitoring of each VM, and simple backups and restores. All without any weird licensing.
That server houses 4 VMs right now. That might be a bit much for your mini PC, but you could do a couple. The multiplayer servers are the main hog, so I isolate resources for those. The Windows machine is only for development, which isn't your exact use case, but I can say I've never had an issue when I've needed it. The only thing I can't speak to is the need for graphics passthrough.
I have run Proxmox for several years, rely on it for many bits of house and network infrastructure, and recommend Proxmox overall. My desktop also runs Proxmox with PCIe passthrough for my "actual" desktop (but this is a different Proxmox server from the primary VM and container host for infrastructure).
That said, I wouldn't mix the two use cases, either initially or over the long term. House/network infrastructure should be on a more stable host than the retro-game console connected to your TV (IMO).
In your case, I'd recommend buying another PC (even an ancient Haswell would be fine to start) and getting experience with vanilla Proxmox usage there, before jumping straight into running infra and MAME/retro gaming with PCIe passthrough on the same singleton box.
I'm in the middle of this. I got a Beelink mini PC. It came with Windows, licensed oddly. I'm configuring it as a home server. The current plan is to migrate my Unraid install over from the vintage server it's currently on. Most services run in Docker. We'll see how performance is.
Proxmox is on my list to try out. So far I'm very happy with Unraid. It makes it easy to set up network shares, find and deploy containerized services, and it handles VMs if you need them. I try to avoid VMs and focus on containers because they're more flexible resource-wise.
If the PC is beefy enough (Win 11 Pro runs smoothly), just go with the included Hyper-V. IMHO you don't get any benefit from installing Proxmox on bare metal in this scenario. YMMV, of course.
If you want to use Hyper-V, you can use GPU-P (GPU Partitioning), where Hyper-V passes the GPU through to the VM and shares it. It's not some emulated adapter; it's genuinely the real GPU running natively, and you can share it across multiple VMs and the host. Linux has NOTHING that can compete with this feature.
If it's just the few simple things you list, I might stick with Hyper-V. If you care about more sophisticated VLAN'd networking setups, I would probably go with Proxmox. But hardware passthrough is a can of worms, so understand there will be a tradeoff.
Can someone explain why it is useful to do virtualization at all when you just want to run a small number of things like this?
I have an Ubuntu Server install running on an old laptop to do very basic background jobs, backups, automation, run some containers, etc. Am I missing something by not using a hypervisor? What are the benefits?
I've used this and it's been the easiest way to set up Proxmox, configure the updates, and add HAOS to it. I have it bookmarked so I can recreate the setup later.
Wondering the same. I'm also a libvirt and virsh user, and pretty happy using commands such as `virsh snapshot-create ...`. I've used VMware ESXi in the distant past, and from the screenshots it looks like Proxmox is maybe inspired by VMware? So perhaps a more polished and integrated GUI-based experience than libvirt? It also looks like Proxmox supports containers in addition to virtual machines. And I see from other comments that there are features to migrate VMs between hosts, which would be a much more manual effort with libvirt.
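The snapshot workflow maps over almost one-to-one; Proxmox just addresses guests by numeric ID instead of by name. The names and IDs below are examples:

```shell
# libvirt:
virsh snapshot-create-as myvm pre-upgrade "before the OS upgrade"
virsh snapshot-revert myvm pre-upgrade

# Proxmox (VM 100):
qm snapshot 100 pre-upgrade --description "before the OS upgrade"
qm rollback 100 pre-upgrade
```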
generalizations | 2 years ago
After poking at a couple of these, they seem like they're 50% shiny packaging and 50% one-liner bash commands.
whalesalad | 2 years ago
j45 | 2 years ago
chronicsonic | 2 years ago
I do a quick look over the source to make sure it's OK. It saves me so much time and pain.
wutwutwat | 2 years ago
AdamJacobMuller | 2 years ago
m463 | 2 years ago
hiAndrewQuinn | 2 years ago
helpfulContrib | 2 years ago
[deleted]
SV_BubbleTime | 2 years ago
[deleted]
sschueller | 2 years ago
[1] https://sschueller.github.io/posts/wiring-a-home-with-fiber/
VectorLock | 2 years ago
secabeen | 2 years ago
fb03 | 2 years ago
I'm always reading stuff about people using Proxmox, but I don't really understand its "edge".
kube-system | 2 years ago
Modified3019 | 2 years ago
peddling-brink | 2 years ago
It has a webui with a bunch of features.
If you don't like or want those things, that's fine. This question feels like a "no true power user would...".
mrsilencedogood | 2 years ago
plagiarist | 2 years ago
pbnsh | 2 years ago
darken | 2 years ago
znpy | 2 years ago
jtriangle | 2 years ago
Like any distro, it has some quirks, but those are worth dealing with because the day-to-day is almost always frictionless.
gosub100 | 2 years ago
oriettaxx | 2 years ago
j45 | 2 years ago
alias_neo | 2 years ago
nonane | 2 years ago
skazazes | 2 years ago
alexgaribay | 2 years ago
briangray | 2 years ago
sokoloff | 2 years ago
pbronez | 2 years ago
haraldooo | 2 years ago
maldev | 2 years ago
JamesSwift | 2 years ago
unknown | 2 years ago
[deleted]
znpy | 2 years ago
But I'd be happy to be proven wrong.
__jonas | 2 years ago
Zardoz84 | 2 years ago
aaronax | 2 years ago
asylteltine | 2 years ago
unixhero | 2 years ago
lxe | 2 years ago
op00to | 2 years ago
oriettaxx | 2 years ago
allanrbo | 2 years ago
RulerOf | 2 years ago
I'm a virsh/KVM user myself, but I admit I'd probably leverage more features of the platform if the interface were easier to use.
tamimio | 2 years ago