Homelab Introduction - Hardware

I’ve been running a small homelab for a while. It started as a VM on my main desktop and has grown to three servers hosting various Docker containers, applications, and files. This post is all about the hardware journey I’ve been on.

Physical Servers

I currently have 2 physical servers in actual use, and one that I’m working on getting set up. I’ll spare you their internal names, and mostly just refer to them as main, backup and router.

main is basically the continuation of the original VM I mentioned above. I was mainly looking to mess around with various Linux utilities and the like while attending college. I had an AMD FX-8350 in my gaming desktop, which left me with a decent amount of overhead to run the VM. I eventually decided I wanted to invert this, so I installed Debian on the bare metal, added a second GPU, and passed it through to a Windows VM for gaming. I also added some mass storage at this point in the form of two 5 TB Toshiba drives, mirrored in a RAID-1 via mdadm. I ran this config for a while, until AMD came out with the first generation of Ryzen processors and I decided to upgrade my computer. It was time to have an actual server.
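For the curious, that mirror setup boils down to a single mdadm call. Here’s a minimal Python sketch of the idea (the device names are placeholders, not what the drives actually enumerated as on my system):

```python
#!/usr/bin/env python3
"""Minimal sketch: create a two-drive mdadm mirror (RAID-1)."""
import subprocess

# Hypothetical device names for the two 5 TB drives.
drives = ["/dev/sdb", "/dev/sdc"]

# --level=1 is a mirror: both drives hold a full copy of the data.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=1", f"--raid-devices={len(drives)}", *drives],
    check=True,
)
```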

I was making enough money at my internship in college that I felt pretty OK making relatively expensive purchases. Along with my shiny new 1800X, RAM, motherboard, and custom water cooling loop, I bought a basic 4U 12-bay Rosewill server chassis. After getting the new system up and running, I plonked the old parts into the chassis, and a server was born!

One of the first things I did with the server was spin up a few different VMs for isolation. I had one that my friends had SSH access to, one that was solely for my use, and then I started spinning up new ones for various services I wanted to try. The first real “production” application that I ran on the server was Golem, a Discord bot that I had written (at the time) in Python as a way to get more familiar with the language. I was also using the server as a home media server, but only in a janky way: I had NFS shares set up to be mounted on my laptop, and I would use mpv to watch whatever show or movie I wanted with the laptop hooked up to my TV.

The next real phase of the server happened when I decided that I wanted to split its purpose: I wanted a dedicated NAS with a more low-power CPU, and would just mount the storage over the network. I ended up getting a cheap mATX board, an i3-6100, and 8 GB of RAM. To hold the new server, I went with another Rosewill chassis, this time with 15 hot-swap drive bays. I got another pair of 5 TB drives (this time Seagate). I had done some research about different filesystem choices and wanted to try out ZFS, so with the new pair plus the old Toshiba drives, I ended up with a pool of two mirror vdevs.
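If you haven’t used ZFS before, a pool of mirror vdevs just means the drives get handed to zpool create in pairs, each pair prefixed with mirror, and writes then stripe across the mirrors. A rough Python sketch of how that command gets assembled (pool and device names are made up):

```python
#!/usr/bin/env python3
"""Sketch: build a zpool out of mirrored pairs of drives."""
import subprocess

POOL = "tank"  # placeholder pool name
# Each pair becomes one mirror vdev; the pool stripes data across the vdevs.
mirror_pairs = [
    ("/dev/sdb", "/dev/sdc"),  # the old Toshiba 5 TB pair
    ("/dev/sdd", "/dev/sde"),  # the new Seagate 5 TB pair
]

cmd = ["zpool", "create", POOL]
for first, second in mirror_pairs:
    cmd += ["mirror", first, second]

# Equivalent to: zpool create tank mirror sdb sdc mirror sdd sde
subprocess.run(cmd, check=True)
```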

I added a few drives as time went on and the pool filled up, before I eventually decided I wanted a bit more space. I bought a used Supermicro 846 off of eBay, which came with dual mid-tier Xeons. At the same time, I upgraded the compute server to the same motherboard, but with dual clock-speed-optimized Xeons (twelve cores between them) and 64 GB of RAM. At this point, I realized that I really didn’t need these two machines to be separate. I had a 25G InfiniBand connection between the two for low-latency, high-bandwidth network storage, and it had started to complicate various things (e.g., Gradle needed a kind of file locking that the NFS version which worked over RDMA didn’t support).

So I decided to move the fast CPUs into the storage chassis. This ended up going fairly smoothly. I had some pains migrating certain services that were installed directly on the host, and I managed to drip sweat onto one of the two motherboards, thereby leaving me with a large piece of e-waste, but I ended up with a working server. I didn’t change it much from here: drive upgrades and a couple of iterations of fan and PSU swaps to reduce noise (the server lived in my office most of the time).

Somewhere in there, a friend decided to build his own server, and we completed the project of cloning my media storage onto it. This worked out well for both of us: we both had access to the media, and I essentially had a remote backup that was within driving distance. However, the driving-distance part was about to change - my partner accepted a job up here in Juneau.

To deal with this, I decided I wanted a compact, high-capacity backup server. I settled on an 8-bay U-NAS ITX chassis, using the previous CPU from the storage server and an ITX motherboard. Meet backup. It is really compact and supports 8 hot-swap drives. Again: a basic Debian install, a couple of drives to get capacity up to what main has, initialize the zpool across all of them, and we have a functional destination for backups!

It took me a while to nail down my backup scripts. I am using ZFS snapshots and zfs send / zfs recv to replicate the data. Using snapshots lets me send incremental data, so backups don’t take forever. That peace of mind meant I didn’t have to worry about losing data during the drive from Kansas to Washington and the three-day ferry ride that followed.
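The core of the idea is simple: take a dated snapshot, then zfs send the delta since the last snapshot that already exists on backup, piped into zfs recv over SSH. Here’s a stripped-down sketch of one replication step (dataset, pool, and host names are placeholders, and the real scripts track the previous snapshot rather than hard-coding it):

```python
#!/usr/bin/env python3
"""Stripped-down sketch of an incremental ZFS replication step."""
import datetime
import subprocess

DATASET = "tank/data"                # placeholder dataset on main
TARGET_HOST = "backup"               # ssh alias for the backup server
TARGET_DATASET = "tank/data"         # where it lands on backup
PREV_SNAP = f"{DATASET}@2023-10-01"  # last snapshot backup already has (placeholder)

# 1. Take a new snapshot named after today's date.
new_snap = f"{DATASET}@{datetime.date.today().isoformat()}"
subprocess.run(["zfs", "snapshot", new_snap], check=True)

# 2. Send only the changes between the two snapshots, piping the stream
#    into `zfs recv` running on the backup box over ssh.
send = subprocess.Popen(
    ["zfs", "send", "-i", PREV_SNAP, new_snap],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["ssh", TARGET_HOST, "zfs", "recv", TARGET_DATASET],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
send.wait()
```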

Finally, we get to a couple of weeks ago. I had started feeling the pain of the E5-2643 v2 CPUs. They are really old, and one of the main CPU-bound things I do on the server - hosting the annual modded Minecraft server - just wasn’t working anymore. The past two had to be hosted on my desktop, which presented some logistical issues. The other thing that happened was that EPYC parts finally got cheap! I got an EPYC 7371 (clock-speed optimized, yet again), a single-socket motherboard, and 256 GB of DDR4. It feels great to have the new CPU installed and working in main.

router is a lot newer - I picked up a used Dell R210 II off of eBay. The plan is to turn it into a Debian router, and have it host my LDAP server, my nginx reverse proxy, and my two DNS VMs.

