The Homelabber Chronicles

In my last blog post, where I spoke about migrating the blog to Ghost, I briefly mentioned my homelab, specifically the server rack I have in my closet. While writing that update, I realized that I didn't have any homelab content on my blog, which really surprised me. As such, I decided it was about time I told you all about the homelab.

Homelabbing has become a significant part of my life and has taught me so much over the past year. It's been such a positive experience that I talk about it with all of my friends and won't shut up about it when the opportunity arises in conversation. The point of writing this article is not only to chronicle my journeys for the blog, but also to hopefully inspire a new generation of homelabbers to keep the fires of curiosity burning.

This article was a ton of fun to write and had me sleuthing through message histories, purchase histories, emails, logs, and more to try to reconstruct a timeline for you all. It took a lot of work to put this together, so I hope you all enjoy the read!

Humble Beginnings

When I decided to get into homelabbing, I had already been working as a sysadmin for a year and a half. I had gathered plenty of skills over that time that made me competent at my job but left my curiosity... unsatisfied. The only servers I had worked on at that point were production servers, and they were extremely locked down. The only person on my five-person team with sudo access was my boss, the director of our department, so I couldn't do much.

I decided I needed a sandbox: something more than just WSL on my desktop, something that would let me experiment with servers and wouldn't charge my card monthly either. Luckily, the timing of this decision coincided with me preparing to move into a new apartment with a bunch of comp sci and engineering majors, most of whom were also interested in setting up an above-average home network.

Only a week before moving in, while chatting with my future roommates, I stumbled upon Jeff Geerling's channel on YouTube. If you ask my girlfriend, she'll insist that the moment I found his channel was the worst thing to ever happen to my wallet.

Watching Jeff's videos, I became enthralled with the idea of using Raspberry Pis as my sandbox. His Dramble project was of particular interest to me, as it seemed small but complex enough that it would be the perfect starting point for my sandbox. I bounced the idea of getting a couple of Pis off my roommates and they were cool with it, so I ordered everything I needed to get started.

While I was waiting for the Pis to come in, I spent most of my time on r/homelab, and if there's one major takeaway from that sub, it's that everyone enjoys a good network diagram. I decided to sketch out a projected network diagram for the new apartment, which included the Pis that I had just ordered.

The projected network diagram. I ended up getting 2GB models, by the way.

My homelab would start out with humble beginnings: the two Pis, called "Hermes" and "Hypnos"; an M1 Mac Mini that I used for work ("Patch"); and my own souped-up, decommissioned 2016 Mac Mini that ran a few personal Discord bots ("Artemis").

A week later, the day before my new lease began, I received two Pi 4B 2GBs, two PoE+ HATs, and an unmanaged 5-port PoE+ switch in the mail. Between packing boxes, I found time to assemble them into their acrylic case and image their SD cards in preparation for the new network.

A few days after moving in, I decided to look at the Pi project again. My apartment was scheduled to receive fiber soon™ after moving in, but our timing meant that we had to use Xfinity as our interim service provider. On top of that, in order for us to get moderate speeds with no data cap at a reasonable price, I was forced to start out with Xfinity's stock router. This meant that, for now, I was stuck with spotty cable service and a spot on a waiting list.

Xfinity's stock router certainly handicapped what I could do on my network, but it didn't stop me from setting up Pi-hole on Hermes for any device that was able to set its own DNS servers. Over the following few weeks, I tossed a few more light Python scripts onto Hermes as well and taught myself how to use systemd to run the scripts at set intervals. One script was a homebrewed solution for DDNS using Cloudflare's API, and the other would send weather forecasts to my apartment Discord server at 8 AM every day.
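
If you've never used systemd for scheduling, the pattern is a small service unit plus a matching timer unit. Here's a minimal sketch of what mine looked like; the unit names and paths are illustrative, not my exact files:

# /etc/systemd/system/ddns.service
[Unit]
Description=Update a Cloudflare DNS record with the current public IP

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /opt/scripts/ddns.py

# /etc/systemd/system/ddns.timer
[Unit]
Description=Run the DDNS updater every 15 minutes

[Timer]
OnBootSec=2min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now ddns.timer, and systemd handles the rest, including logging every run to the journal.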

Hypnos ended up sitting around, inactive, for a while. At one point, Hypnos ran the Steam Link software, but the latency in the living room was a little too high for my liking. Also, as it turned out, my roommate had brought an actual Steam Link from home, which worked way better, so Hypnos was retired after about a week. Back to gathering dust!

NFTs & Leaving Discord

Everything was calm in my network until November 8th, 2021, when Discord CEO Jason Citron teased that NFTs would potentially be coming to the service.

At that point, I practically lived on Discord. All of my coding friends, classmates, long-distance friends, and lots of other important people in my life used Discord to communicate with me. We all vehemently hated NFTs and their environmental impact, and there was a huge online backlash to the news as well. I canceled my Nitro in protest, made one of the thousands of angry anti-NFT posts in their support forums, and actually went so far as to look into alternatives in case the situation worsened. I found some software that I liked but knew that if I were to host it, I would need to dramatically improve my infrastructure before it would be feasible. So after taking a day to research, I bought another domain and took to eBay.

It only took me two days to find a starter deal I liked: a 24-port 1U PoE+ switch for $40. Perfect for me to start with. I bought it immediately and moved it, my Pis, and my Macs into a corner of the living room, where my lab would live for a while.

The new switch on my nightstand

I had enough sysadmin experience at that point to understand that my servers should live in a separate "subnet" that wasn't allowed to communicate with the rest of my LAN, but not enough to know exactly what a "subnet" consisted of, or how to go about creating that environment with only a stock router and a managed switch. My initial idea was to use ACLs on the switch, because I had seen a forum post saying that some switches support an "established" keyword that would allow limited inter-VLAN communication. Alas, that research was pointless.

After struggling for a while, I gave my boss a call, and he suggested using a router to do a router's job instead of a switch. Fair point. Luckily, he had a few spare Linksys WRT1200ACs lying around, one of which he was willing to loan me. The goal was now to set up a double NAT: my Xfinity router would power the house and port-forward server traffic into this new WRT router, which would power the new server subnet.

The new setup

This setup worked for a long while, as well as a double-NAT setup can. I was able to set up a Minecraft server for my friends, hosted on Artemis, which ran for over a month with no issues, so I would say it was pretty stable.

Racking It All

I will admit, though, that having a rack-mount switch with no rack to put it in left a sour taste in my mouth. I sat on the idea of buying a rack for a few months, mainly because server hardware is so damn expensive, before finally biting the bullet in January. I dropped a few hundred dollars on a 12U desktop rack, a shelf, a PDU, and the other accouterments I would need to make it feel truly like a homelab.

While purchasing the rack, I found a deal on eBay for a ThinkCentre M93p that I just couldn't refuse. Some important context about me: I will always blame my father for giving me the gene that throws financial responsibility out the window the second I see a good sale. I had already been wanting a Windows machine when I saw the listing, so this was a pretty quick decision. That's the new mini-computer at the bottom, and I decided to call it Apollo.

You may also notice the new housing for the Pis in the above picture! I found this 3D-printed rack design through Jeff Geerling's blog. This specific print was made possible in collaboration with the geniuses over at the FSU Innovation Hub. I worked there a few years back and everyone there is like family to me, so if anything, this project was a nice excuse to go catch up. Shoutout to Ren, Eric, Castle, Mason, and the rest of the Fablab techs who helped me out!

Shortly after assembling the rack, I found another deal, this time on a 1U CyberPower UPS. The gene kicked in again, so I picked it up as well and figured that, for the most part, my rack was now complete. My problem shifted from hardware to software: how was I going to host things on these computers, and how was I going to monitor it all?

My job is super old-fashioned, as my department has been around for literal decades and hasn't migrated to the cloud yet, so my original plan was to run the software I wanted directly on the Pis. Just put Ubuntu Server on them, apt-get or curl whatever I needed, edit some conf files, and do everything the old-fashioned way. Just like work. This setup served me well to start, as everything was pretty familiar.

On top of that, my office has a computer that hosts Nagios, a monitoring dashboard that shows us server uptime and emails us when something goes wrong. I decided to use a similar setup to keep track of my own uptime. Hermes ran Pi-hole and my Python scripts, while Hypnos ran Nagios and, eventually, Network UPS Tools ("NUT") to communicate with the UPS once it came in.

An early screenshot of my Nagios dashboard
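
For anyone who hasn't touched it, Nagios is configured through plain-text object definitions. A minimal host-plus-service pair looks roughly like this; the host name, address, and thresholds here are illustrative, not my real config:

define host {
    use        linux-server   ; inherit a stock host template
    host_name  hermes
    address    192.168.0.10
}

define service {
    use                  generic-service
    host_name            hermes
    service_description  PING
    check_command        check_ping!100.0,20%!500.0,60%
}

Once a definition like that is loaded, Nagios starts polling the host on a schedule and flags it when checks cross the warning or critical thresholds.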

Over the next month, I would mainly mess with Nagios. I made a couple of tools for it, which I put on GitHub, and mostly just tried to keep the network stable whenever I got a critical error message via Discord.

Getting a Synology

In February, some finances loosened up and I was once again in the market for something to spice up the lab. I decided to get a third Pi ("Hyperion") for hosting Nginx and maybe Argo Tunnel, and I also found a great deal on a Synology RS816 on eBay that really piqued my interest.

Up until that point, Synology was a name I kept hearing in the homelabber space, but I never knew much about it beyond it being a NAS solution. My roommates and I had kind of wanted to engineer our own NAS using TrueNAS, since we all watched Linus Tech Tips regularly, saw that they used TrueNAS, and felt compelled to try it ourselves. However, after researching Synology and comparing the price I could get this RS816 for against the cost of building a TrueNAS server, I was sold.

Updated diagram for Feb. 2022, before Hera was renamed to Hyperion

The Synology came in with a 4TB WD Purple installed and was put to use the second I powered it on. Everything large went on there: music, videos, my ISO collection, a network game cache, etc. I eventually upgraded it to house four 4TB WD Reds in SHR1, which gives one-drive fault tolerance. With four equal drives, SHR1 behaves like RAID 5, so (4 − 1) × 4TB = 12TB raw, which works out to roughly 10.7 terabytes of usable storage once base-2 reporting and filesystem overhead are accounted for. With off-site backups, too! On top of this, I ended up replacing NUT with Synology's built-in USB UPS management and also used the NAS to host local DNS after Pi-hole proved to be spotty. That machine was (and still is) a powerhouse. I absolutely adore it.

Finally, Fiber

On March 23rd of this year, the local fiber company finally got back to me and said I was next on the waiting list to have my service installed. Finally! After over 8 months on that list, we could now get symmetric gigabit with a static IP for the same cost as our 800/20 plan with Xfinity.

I think the most important and exciting part of this news was that I could finally toss that piece of garbage Xfinity xFi router into the bin and make my server rack the hub of the apartment.

I spent the next few days learning about routing with OpenWrt: interfaces, firewall zones, VPNs, etc. Eventually, after spending some time asking questions on the forums, I was able to put the OpenWrt router at the helm of my apartment and officially made the swap to fiber.

The initial 4 internal zones for my network
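
OpenWrt stores all of this in plain UCI config files, which made the zones easy to reason about. As a rough sketch, a pair of zones plus a one-way forwarding rule in /etc/config/firewall looks something like this; the zone names and policies are illustrative, not my exact rules:

config zone
    option name 'lan'
    option input 'ACCEPT'
    option output 'ACCEPT'
    option forward 'ACCEPT'
    list network 'lan'

config zone
    option name 'servers'
    option input 'REJECT'
    option output 'ACCEPT'
    option forward 'REJECT'
    list network 'servers'

# let the trusted LAN reach the servers, but not vice versa
config forwarding
    option src 'lan'
    option dest 'servers'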

Finally having full control over the network meant that I could do things like setting Pi-hole as the default DNS server for the network, with 1.1.1.1 as the upstream. The days of Xfinity privacy invasion were finally over. Boy, were we relieved.

The New Era: Containers

A month later, in April, I was halfway through a Python and DevOps class in which the majority of our grade came from a group project that my group had been working on pretty heavily. One morning in class, the teacher threw a curveball at all of the groups: with less than one month until the due date, we had to get it working with Docker.

Up until that point, I had avoided Docker like the plague because the last time I tried to get it to work, I found it was unstable and royally screwed with my WSL install. I remember complaining to lots of people that I had to spend an extra 6 hours of my day fixing what Docker had broken. This time, however, the experience was much more stable and enjoyable. I actually enjoyed messing with it and learning about it so much that it drove me to write my own blog post on Docker.

Now that I knew how powerful a tool containerization was, I couldn't get enough. I wanted to add containers to my sandbox somehow. After looking back at good ol' Jeff G's channel, I knew that Kubernetes was the answer. I also knew from talking with other computer and data science interns at FSU that Kubernetes was a pretty big buzzword in tech, and experimenting with it in my sandbox would probably be a wise thing to do. It would certainly get my foot in the door and also give me experience playing with containers in a semi-production environment.

I decided to take all three of my Pis, make backups, wipe them, and reimage them with Ubuntu Server and K3s. K3s is my preferred lightweight flavor of Kubernetes, primarily because MicroK8s just wouldn't work for me. Their new names would be "Siva1", "Siva2", and "Siva3", because I'm a hopeless Destiny 2 nerd who thought it would be funny to name my cluster of self-managing computers after a fictional cluster of self-assembling, self-replicating nanotechnology.

Along the way, I wrote notes on how I configured the cluster on GitHub, in case anyone's interested.
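
The base install is refreshingly simple. The gist, per the K3s quick-start docs (the hostname and token below are placeholders):

# on the first node, which becomes the control plane:
curl -sfL https://get.k3s.io | sh -
# grab the join token it generates:
sudo cat /var/lib/rancher/k3s/server/node-token
# on each additional node, join the cluster with that token:
curl -sfL https://get.k3s.io | K3S_URL=https://siva1:6443 K3S_TOKEN=<token> sh -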

My cluster dashboard

Nowadays, I mainly tinker with K3s and Docker in my spare time, seeing what I can get this thing to do. I configured the Synology to act as a PVC storage medium via NFS, making it really easy to add new apps to the cluster. I'm constantly learning more about how this cluster works, how to make it more efficient, and how cool self-hosting can really be.
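
The simplest version of that NFS wiring is a static PersistentVolume pointing at the NAS, plus a claim that pods can mount. A minimal sketch; the server address, export path, and sizes are illustrative:

# a PersistentVolume backed by an NFS export on the Synology
apiVersion: v1
kind: PersistentVolume
metadata:
  name: synology-nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.20   # the NAS
    path: /volume1/k3s
---
# a claim bound explicitly to that volume, for pods to mount
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: synology-nfs
  resources:
    requests:
      storage: 50Gi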

Pi-hole went on there relatively easily. Nginx and Homer did as well. I now have a couple of other cool tools on there too, like n8n, which has quickly become my replacement for IFTTT now that IFTTT is, by and large, a paid service. A week ago, I made the decision to toss Ghost on there and migrate my blog from Jekyll to my own silicon! That brings us to now, where here I am writing away, with lots more experience under my belt than I had a year ago.

The rack, as it sits right now in my closet

Summary

My journey as a homelabber has been extremely enjoyable and fruitful thus far! At the moment, the plan for the future is to save some cash for a little while, since I admittedly went a little crazy with purchases earlier in the year. I'll start expanding later by buying a few more Pis for the cluster. RAM limitations are becoming more noticeable, so expanding horizontally, or maybe even replacing some of the nodes with ones that have more RAM, would be wise.

I also have a bunch of spare parts from working on PCs over the past year, so I'm also considering building a small 2U server with an R7 2700X and a GT 1030 inside for game server hosting or whatnot. The future of this rack is bright, and I'm stoked to keep working on it.

My message to any other student, developer, or fellow nerd that may be considering whether to start their own homelab: Do it!

I didn't realize it until last week, but I've learned so much over the past year alone, all because of this project. It's knowledge I wouldn't have gotten anywhere else, in either my degree or my job. From routing to switching, systemd to Kubernetes, firewalls, NAS servers, and beyond: tinkering with this project has been one of the most fulfilling hobbies I've had in years, and I will always highly recommend it to anyone even remotely interested in IT or DevOps.

I expect my knowledge to continue to grow the more that I mess with this rack, and I'm excited to just keep poking at it and see how I can keep incrementally making it better. Now I just need recommendations for more software to run on the cluster. Maybe I'll have a federated software kick next. Matrix and Mastodon, anybody? Who knows.

If you're still reading, I hope you enjoyed the read. As I said in the introduction, hopefully this post will inspire at least one or two other people to start a homelab of their own. If that's the case, then my three days of researching, writing, and editing will all have been worth it. :)

That's all I have for now. Till next time!