Messing with Proxmox

2023-11-08

I recently grabbed a cheap-ish server box from Hetzner's Server Auction, with an AMD Ryzen 7 1700X (8 cores/16 threads), 64GB of RAM and two 512GB SSDs, for the low price of ~£34.06. This was with the intention of messing around with Proxmox, preparing myself for the eventuality of building my own virtualisation server. I've had some exposure to things like Proxmox before, with ESXi/vSphere powering floofy.tech's Kubernetes cluster, but for the most part my virtualisation experience has been QEMU/virt-manager, VirtualBox and some VMware Player - or renting a virtual private server from a cloud provider.

But alas, the need for a dedicated virtual machine box eventually came. My partner helps run several VRChat-based club events, which require a dedicated streaming box to function smoothly. The plan is to upgrade my personal desktop, move its current components (Ryzen 7 5700X, 32GB of memory, RTX 2080 SUPER) into a rackmount server, then run Proxmox on that server with at least one large Windows virtual machine with the GPU passed through (running vMix, which I've been told is a beast), and another running a VRChat bot/camera account with its own GPU (a GTX 1070 we have). Before sinking time into doing the experimentation on my own hardware, I rented the dedicated box from Hetzner to do some exploration.

Step one was installing Proxmox itself. I decided to go the route of using the ISO provided by the project rather than installing it on top of Debian, since it seemed like the right way to make sure I had everything. Initially I believed I would need to use a network KVM provided by Hetzner to mount the image file and boot from it, but that didn't work as expected - the KVM was clunky, and while I managed to get through the installer, it didn't appear to install correctly. While doing research, however, I happened across this gist that walked through installing Proxmox not through the KVM, but rather through the rescue system and QEMU - attaching the server's physical drives to a QEMU instance, remoting into it over VNC, and running the install from there. It felt crazy, but it worked, and we were off to the races!
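The heart of that trick, roughly as I ran it from the rescue system (the ISO version and disk device names here are from memory, so treat them as placeholders and check yours with lsblk):

```
# from the Hetzner rescue system: grab the installer ISO
wget https://enterprise.proxmox.com/iso/proxmox-ve_8.0-2.iso

# boot the installer in QEMU with the server's *real* disks attached,
# exposing the display over VNC bound to localhost only
qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 \
  -boot d -cdrom proxmox-ve_8.0-2.iso \
  -drive file=/dev/sda,format=raw,media=disk \
  -drive file=/dev/sdb,format=raw,media=disk \
  -vnc 127.0.0.1:0
```

From there it's an SSH tunnel (`ssh -L 5900:127.0.0.1:5900 root@<server>`), a VNC client pointed at localhost:5900, and clicking through the installer as if it were a local machine.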

After rebooting and updating packages, I found that out of the box Proxmox was configured to use their enterprise package repositories, which are only accessible with a paid support plan. I was a little conflicted about this decision, but after some consideration it makes sense. If you know what you're doing, it's trivial to swap over to their "No-Subscription" repos, and you probably don't need their support anyways. But if you don't know what you're doing and you're deploying in production, this out-of-the-box behaviour pushes you in the right direction to get the support you might need. While the "Non production-ready repository enabled" notice is unnerving, I happily run the unstable branch of NixOS/nixpkgs on my machines, and ran Arch Linux for a long time, so I'm a little less sensitive about being off the slow and steady software track. The "no valid subscription found" popup on login, on the other hand, is more annoying, and I would prefer a way to turn it off without tinkering with the JS for the Proxmox frontend. While € 8.75/month isn't a lot for their base subscription, I'm still in the experimentation phase, and while I will likely end up with a license to support them, I'm not quite at that stage yet.
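For anyone else making the swap, it comes down to two small apt changes (paths as they are on the Proxmox VE 8 / Debian bookworm install I'm running):

```
# disable the enterprise repo, which just 401s without a subscription key
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# enable the no-subscription repo instead
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt full-upgrade
```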

Step two was setting up and locking down networking and firewalls. I quickly got the machine joined to my tailnet - thanking Proxmox for not taking the appliance approach with their system - and added some firewall rules to it from Hetzner's interface, ensuring only my tailnet machines have full access to important ports like SSH and Proxmox's web interface, while allowing traffic through to most of the unreserved ports (1024-65535). At this point I had a fully functioning Proxmox box to do with as I pleased - so what do I do with it?
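(Joining the tailnet, for reference, works the same as on any other Debian box - a nice side effect of Proxmox being ordinary Debian underneath:

```
# Tailscale's standard install script for Debian-based systems
curl -fsSL https://tailscale.com/install.sh | sh
tailscale up   # prints a login URL to authorise the machine on the tailnet
```

The Hetzner firewall rules, meanwhile, live entirely in their Robot web interface, so there's nothing to show for those.)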

Step three was getting a simple virtual machine created, so I grabbed a Debian ISO I had lying around (remember to seed your Linux ISOs, folks!) and got to work. Immediately, confusion. I had to get the ISO onto the box, but there were two items under the "Storage" section of the interface, and it took me a minute (plus a quick search) to figure out that one, "local", is where ISOs and container images are managed, while the other, "local-zfs", is where disk images are stored. Perfect! ISO uploaded and virtual machine created, install process comple- wait, the machine isn't picking up a network configuration?

Networking has always been a bit of a weak point in my knowledge of servers, so I was terrified. After some failed attempts to set up a static configuration, I stumbled across this blog post describing how to use a lightweight LXC-based container ("CT" under Proxmox) to act as a DHCP server. Even better, they were using Hetzner - perfect! We were in business. The LXC container functionality is a little odd - it's close to Docker, but you can't pull down OCI images, which is what I'm used to. I don't intend to use it much, but I should at least understand how it works.
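The heart of that container is just a small dnsmasq config. A minimal sketch, assuming (as on my box) the guests sit on a 10.10.10.0/24 bridge with the Proxmox host at 10.10.10.1:

```
# /etc/dnsmasq.d/guests.conf inside the DHCP container
interface=eth0                              # the bridge-facing interface
dhcp-range=10.10.10.100,10.10.10.200,12h    # lease pool for guests
dhcp-option=option:router,10.10.10.1        # default gateway: the Proxmox host
dhcp-option=option:dns-server,185.12.64.1   # Hetzner's recursive resolver
```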

Anyways! DHCP out of the way, I got Debian installed. The machine doesn't do anything, but it exists, and it was time to set up some other machines for testing. First up - Windows! Theoretically this was straightforward, until the Microsoft website decided I was unworthy and blocked me from downloading ISOs. In the back of my mind I remembered a tool that could build Windows ISOs for me, but for the life of me I could not remember its name. It took me a solid day of Googling to track down UUPDump.net, and after a little bit of trial and error, including patching the scripts to work nicely with NixOS (mostly pointing /bin/bash to /usr/bin/env bash), I got an ISO built and uploaded to the Proxmox box. Installation was fine, clicking through "I don't have a license key" and bypassing the online account creation process. As part of this testing, however, I needed to test Parsec. Easy enough, until I hit firewall issues...
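(The NixOS patching, for the curious, was nothing clever - roughly the following, since NixOS deliberately ships no /bin/bash:

```
# rewrite the scripts' shebangs so bash is found via PATH instead of /bin
sed -i '1s|^#!/bin/bash|#!/usr/bin/env bash|' ./*.sh
```

Now, about that firewall.)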

Usually, Parsec will try to make a peer-to-peer connection, or route through their STUN/TURN servers if behind a firewall. However, due to the "double NAT" (I think?) setup I had going, neither of these options seemed to be working. It's worth noting that at this point I hadn't fully figured out my firewall rules within Hetzner, and didn't have the unreserved port range allowed. Long story short, once I added that unreserved port range, Parsec managed the connection fine. Besides needing to do software encoding in a virtual machine, Parsec worked great, and that was another thing ticked off my list.

I had two things left on my list - some minor stress testing, and figuring out a more robust router setup. The first one was easy - I set up a little Minecraft server using the TurnKey Linux-based MineOS, and after some iptables rules to forward port 25565 to the machine and exposing the 10.10.10.0/24 subnet through Tailscale (sketched below), I had a nice little server running that others could connect to. Performance isn't fantastic, but it's an older 1700X, so I wasn't expecting too much. But I wanted to take it further - for a long while I've had Pterodactyl.io in the back of my mind, and this felt like the perfect time to give it a go.
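Those rules amounted to something like this on the Proxmox host (the VM's address and the public NIC name are placeholders for my setup; IP forwarding was already enabled for the existing NAT):

```
# send public TCP 25565 to the MineOS VM
iptables -t nat -A PREROUTING -i enp0s31f6 -p tcp --dport 25565 \
  -j DNAT --to-destination 10.10.10.50:25565
iptables -A FORWARD -p tcp -d 10.10.10.50 --dport 25565 -j ACCEPT

# advertise the guest subnet over Tailscale (then approve it in the admin console)
tailscale up --advertise-routes=10.10.10.0/24
```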

Pterodactyl is way more complete than I expected, which might be in part because the last time I looked at it was pre-1.0. It's split into two parts: a PHP-based dashboard/admin interface, and a Go-based "Wings" daemon that runs the actual game servers. I initially thought it was one or the other, and ended up with both in the same virtual machine - not necessarily a bad thing, but hosting them separately makes a ton of sense, and I was really excited by a lot of the functionality Pterodactyl has for managing different "nodes". It might be something I expand on later, but for the time being, after some setup, I got another Minecraft server running along with a Team Fortress 2 server.

If you want to connect to either the Minecraft or TF2 server, you can reach both at 94.130.153.211. The Minecraft server is in creative mode, and the TF2 server will populate with bots. SourceMod is installed, but I haven't done anything with it yet. If you want to use either for anything, please let me know and I can grant you op/admin permissions!

Router-wise, I wanted to play with OPNsense. But as I mentioned before, networking is... not a strong suit. After installing it, and panicking a bit at all the options, I promptly shut off the machine with a TODO item added to revisit it. Don't get me wrong, I'm not totally ignorant of networking, but I wasn't in the right headspace to tackle it at that point, and the little DHCP container was working fine.

While writing this post I did a little cost analysis for the virtual machines I currently have. Comparing against Hetzner Cloud's own (shared CPU) offerings, I have the following (costs converted to GBP, so subject to exchange rates):

| Specs | Hetzner Cost (est) | Total |
| --- | --- | --- |
| 4GiB/2CPU x 2 | ~£5.59 | ~£11.18 |
| 8GiB/4CPU | ~£14.20 | ~£14.20 |
| 12GiB/4CPU | ~£20.88 (16GiB) | ~£20.88 |
| 2GiB/2CPU | ~£4.54 | ~£4.54 |
| Total: 26GiB/12CPU | | ~£50.80 |

It should be noted that while I have plenty of CPU on the box, it's an older Ryzen chip that may not be directly comparable to whatever Hetzner is using in their shared-CPU offerings. They also don't have a 12GiB box for direct comparison, but I could dedicate 16GiB to the virtual machine in question. So I'm saving a decent amount over renting each box individually, and I have far more flexibility in resource allocation, at the cost of more time spent managing the servers and the NAT networking (which I could work around by ordering more IPv4 addresses from Hetzner). This table also doesn't account for disk size/usage, or for the networking performance of each box.

And that's where I am with Proxmox right now! It's a little weird not using NixOS for a machine, but for a dedicated virtual machine host it feels like the right move. There is a Terraform provider, but I haven't given it a look yet, and frankly both the host it's installed on and the VMs themselves would likely be pet machines (rather than cattle) anyways. Proxmox being based on Debian, rather than taking the appliance approach that TrueNAS (CORE) does, makes it much easier to tinker with and feels far more flexible, as there aren't any assumptions about the skill level of the end user.

As a final note, I do still have some free compute on the box, so feel free to reach out to me on the fediverse or Matrix and we can have a chat about setting up a machine or game server for you while I have this running.