Posts & AWS - Like Oil And Water

I'm losing hair, and quickly.

💡 This post is a direct sequel to my previous post on Cloudflare Tunnels. You can find that post here.

Hey all.

If you've been keeping up with my updates from the last post, you'll probably know by now that I have not been having a fun time recently. This post is intended to be a log of all of the issues I encountered while attempting to migrate this server from DigitalOcean to AWS, and an explanation of why I ultimately ended up reverting.

Planning Ahead

For those who are unaware, for a period over the last two weeks, this server was running on AWS. This was a premeditated decision; I have cost estimates dating back to August of last year when I originally crunched the numbers for this. I had a student credit on DigitalOcean that was expiring in February 2024 and needed a plan to eventually move to a provider that would keep costs as low as possible.

This is primarily because I'm a cheap-ass when it comes to recurring bills. One of the original reasons I started my homelab was that I was personally more comfortable with CAPEX than OPEX spending for my home, and I really didn't want to pay a monthly fee to host services. (Source)

My original napkin math from Q3 last year.

The main reasons I chose to move to AWS were as follows:

  1. AWS was (marginally) cheaper. With the way my napkin math worked out, AWS would have saved me a couple cents per month in comparison to DigitalOcean. This may not seem like a lot, but when you consider that...
  2. I'm already using AWS's S3 and CloudFront to host my image CDN, it made sense for me to have all of my resources hosted by one provider.
  3. As of writing, I'm in the process of earning my AWS Cloud Practitioner certification, so I figured having some hands-on experience would be worthwhile.

So, at the start of this month, I pulled the trigger.

It's important to note that I've been an AWS customer for a while, ever since I set up the CDN for this site, so my free tier qualification had long expired. As such, there was no free t2.micro instance for me to use or whatnot; I had to budget and keep everything as low-cost as possible out of the gate.

I figured that my ideal configuration would be a single t3a.micro instance, one Elastic IP (EIP), and around 15 to 30 GB of EBS storage with nightly snapshots.

Migrating

When this month came around, I started researching ways to migrate with as little hassle as possible. I found a blog post that mentioned using AWS Application Migration Service (MGN), and I decided to give it a shot.

It didn't work, at least for me. I waited a couple of hours for my source server to be backed up to an EBS snapshot, but when I tried to start the next step of spinning up a test server from that data, the whole system would lock up and become unresponsive for 10-20 minutes at a time. I tried a few times before giving up and just provisioning all of my resources myself. Minor setback, but whatever.

I cleaned up the scraps from MGN and spun up a t3a.micro instance. I gave it 30 GB of EBS storage with automatic nightly snapshots, just in case. I also allocated an Elastic IP and assigned it to the instance so that upstream external services could reach the server.
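
If you'd rather script that than click through the console, the provisioning boils down to something like this with the AWS CLI. Every ID here is a placeholder, and the AMI, key pair, subnet, and security group are whatever your account already has:

# Launch a t3a.micro with a 30 GB gp3 root volume
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t3a.micro \
  --key-name my-keypair \
  --subnet-id subnet-xxxxxxxx \
  --security-group-ids sg-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":30,"VolumeType":"gp3"}}]'

# Allocate an Elastic IP, then attach it using the allocation ID that comes back
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx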

Setting up the software on the new server was pretty simple. I configured the firewalls for each of my servers to temporarily allow SSH and MySQL communications between the two to facilitate an easy transition. I used a pretty neat SSH trick to migrate the entire database in one command, used the ghost CLI to make a backup of the original server, reinstalled ghost on the new machine, then unzipped the backup into the right place to restore it.
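
In rough strokes, the Ghost half of that looked something like the following. The paths, filenames, and host are placeholders (the database itself travelled separately via the SSH trick in the sidenote below), so treat this as a sketch rather than a copy-paste recipe:

# On the old server: export a backup with the Ghost CLI
cd /var/www/ghost && ghost backup

# Copy the archive over to the new box
scp /var/www/ghost/backup-*.zip dstuser@dsthost:/tmp/

# On the new server: fresh install, then unzip the backup back into the content directory
cd /var/www/ghost && ghost install
sudo unzip /tmp/backup-*.zip -d /var/www/ghost/content/
ghost restart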

Sidenote: SSH Database Migration Trick

The SSH trick is actually cool enough that I want to document it here. I discovered it in a Medium post from @richb_, which can be found here. All you need is the ability to SSH from the source to the target, then run the following (swapping in your own credentials, database names, and destination host):

mysqldump -u srcdbuser -p'srcdbpass' srcdatabase | ssh dstuser@dsthost mysql -u dstdbuser -p'dstdbpass' dstdatabase

I was able to move the entire database in less than 5 seconds using this trick. Neat!

Within an hour or so, the new server was up and running with seemingly no performance hits! Now it was time to let it run for a few days, monitor performance, and keep a close eye on my operating expenses. I set up a budget to alert me if the costs went over $10, then went on my merry way.
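
That alert is just a monthly cost budget with an email notification attached. If you want it scripted instead of clicked together, something like this does it (account ID and email are placeholders):

# Alert me by email once actual spend crosses 100% of a $10 monthly budget
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{"BudgetName":"blog-server","BudgetLimit":{"Amount":"10","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
  --notifications-with-subscribers '[{"Notification":{"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":100,"ThresholdType":"PERCENTAGE"},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"me@example.com"}]}]'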

Checking Up

Over the next few days, I'd occasionally check up on the machine after work, primarily keeping an eye on AWS's Cost Explorer tool. I was astounded to see that my costs were way above projections. I was expecting daily usage of around $0.25, but most days were coming in at around $0.45. Project that out for a month and it's around $13.50, nearly double what I was expecting to pay.

In fact, this is especially ridiculous when you consider that Ghost's hosted service, Ghost(Pro), only costs $11/month on a monthly billing cycle. You'd imagine that a self-hosted service should be cheaper since you're paying in "sweat equity" or whatnot, but in this case I was getting screwed both ways: I had to maintain it myself and pay more for the platform it's hosted on. This sucks.

After some quick googling, I found a new AWS policy that looked to be one of the bigger culprits. At the start of this month, Amazon started charging for public IPv4 addresses: an in-use public IPv4 address is no longer free and now costs $0.005/hour. That's an additional $0.12 per day, ~$3.60 per month, or $43.80 per year. Outrageous! (Source)

At first, I thought that this policy just applied to EIPs, so I configured Cloudflare Tunnel to be my main point of ingress, and released the EIP. I let it cruise for a few more days and checked back in.
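
(For reference, the mechanical part of that swap is small: the tunnel's ingress config points the public hostname at Ghost's local port, and then the EIP can be let go. The IDs and hostname below are placeholders.)

# /etc/cloudflared/config.yml - send the public hostname to Ghost's local port
# tunnel: <tunnel-id>
# credentials-file: /etc/cloudflared/<tunnel-id>.json
# ingress:
#   - hostname: example.com
#     service: http://localhost:2368
#   - service: http_status:404

# Detach and release the Elastic IP so it stops accruing charges
aws ec2 disassociate-address --association-id eipassoc-xxxxxxxx
aws ec2 release-address --allocation-id eipalloc-xxxxxxxx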

No change! I was still being charged for a public IPv4 address (billed under VPC in Cost Explorer), despite no longer having an EIP. What gives?

Then I remembered that all EC2 instances have a public IPv4 address assigned to them by default, as it's their main way of communicating with the outside world. In my naivete, I figured that if I somehow removed the public IPv4 address, the tunnel would still work via the private IP, and we'd be golden. However, the plan wasn't as simple as it seemed.

I was operating under the assumption that AWS's VPC provides NAT for all private networks by default, and that I could use a gateway or something to reach the internet. That's not the case; a NAT gateway is a separate paid service. Removing the public IP address from an instance removes IPv4 internet connectivity altogether, which was not what I wanted.

This left me with no choice but to set up IPv6 on the instance, in the hope that having it as the main internet connection would be enough for the tunnel to work, and that everything else would sort itself out.

Configuring IPv6

Configuring IPv6 for the instance wasn't too bad, as AWS has some pretty detailed guides on how to do so already. The guide that I used can be found here.
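
The gist of that guide boils down to giving the VPC, the subnet, and the instance an IPv6 address, then adding a route (and the matching security group rules) for it. Roughly, with placeholder IDs and CIDR:

# Attach an Amazon-provided IPv6 block to the VPC, then carve a /64 out for the subnet
aws ec2 associate-vpc-cidr-block --vpc-id vpc-xxxxxxxx --amazon-provided-ipv6-cidr-block
aws ec2 associate-subnet-cidr-block --subnet-id subnet-xxxxxxxx --ipv6-cidr-block 2600:xxxx:xxxx:xx00::/64

# Give the instance's network interface an IPv6 address
aws ec2 assign-ipv6-addresses --network-interface-id eni-xxxxxxxx --ipv6-address-count 1

# Route IPv6 traffic out through the internet gateway
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-ipv6-cidr-block ::/0 --gateway-id igw-xxxxxxxx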

After getting cloudflared to use IPv6 as its preferred IP protocol, we were off to the races. I was able to access my website from the public domain, which meant the tunnel was working. Watching my billing over the next few days confirmed that the changes did get me out of the IPv4 fee, and that I was now operating closer to expected costs.
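
(The cloudflared side of that, for anyone following along, is the edge-ip-version setting, which controls which protocol the daemon uses to reach Cloudflare's edge. It can go on the command line or in the config file; double-check cloudflared tunnel --help on your version.)

# Make cloudflared dial Cloudflare's edge over IPv6
cloudflared tunnel --edge-ip-version 6 run my-tunnel

# or persistently, in /etc/cloudflared/config.yml:
# edge-ip-version: 6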

The only unfortunate part of this whole exchange was that I could no longer SSH into my server directly, as I didn't have IPv6 configured on my home network. Then again, maybe this would be the kick in the ass I needed to finally set it up.

The actual kick in the ass came when I contacted my ISP and found out that they straight up don't offer IPv6 support. Ouch. Don't get me wrong, I love Metronet's cheap and fast fiber internet, but no IPv6 meant that any project on AWS was now going to be significantly harder or more expensive to work on. I would need an alternative way of connecting to my server.

I tried using AWS's Session Manager tool to allow me to get a shell in-browser, but I couldn't get it to work with IPv6. I also tried using the alternative in-browser serial terminal, but the interface was way too slow and generally a bad experience. I wanted SSH back.

I remembered that cloudflared also supports routing arbitrary TCP traffic, including SSH, to clients connected to Cloudflare's WARP VPN. Knowing this, I configured my Cloudflare tunnel to add my server to a private network and route SSH traffic through the tunnel. Now, to reach my server, all I had to do was launch the WARP VPN and SSH in using the server's private IP. It worked!
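
The setup, roughly: enable WARP routing on the tunnel, advertise the instance's private IP as a route, and then SSH to that IP from a machine running the WARP client. The tunnel name, user, and IP below are placeholders, and depending on your cloudflared version the WARP routing toggle may live in the Zero Trust dashboard instead:

# In /etc/cloudflared/config.yml, allow the tunnel to carry private-network traffic:
# warp-routing:
#   enabled: true

# Advertise the instance's private address through the tunnel
cloudflared tunnel route ip add 172.31.0.10/32 my-tunnel

# Then, from a laptop with WARP connected:
ssh ubuntu@172.31.0.10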

But... performance still left something to be desired. Typing had noticeable input delay, large chunks of output would buffer, and I just wasn't a fan. Nonetheless, this was the best solution I had so far, so I stuck with it.

I, once again, let the server idle for a few more days to see if anything would pop up in the error logs.

Sure enough, there were lots of errors, and all of them were IPv6-related. Mailgun, my email provider for this site, does not support IPv6. My AWS S3 connection broke, and I was no longer able to upload images. And despite Ghost's main server being IPv6-accessible, their update server was not.

I started by tackling the Amazon S3 issue. I was able to remediate the problem by telling the Node.js AWS SDK to use AWS's dualstack endpoint, which supports IPv6. However, I then started running into an issue where images would upload, but the editor would display a previously uploaded image instead. Nothing's ever simple, is it?
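
(For the record, S3's dualstack endpoints follow the pattern s3.dualstack.<region>.amazonaws.com. A quick sanity check that the bucket is reachable over IPv6 from the instance is to point the AWS CLI at one; the bucket name and region here are placeholders.)

# List the CDN bucket via the dualstack (IPv4 + IPv6) endpoint
aws s3 ls s3://my-image-bucket --endpoint-url https://s3.dualstack.us-east-1.amazonaws.com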

I'm Tired, Boss.

At this point, I was just about done. I didn't even look at the Mailgun or Ghost update server issues. AWS had been a nightmare to work with from day one, and I just wanted my IPv4 back without an hourly fee.

I ended up moving the server back to DigitalOcean, on a droplet identical to the one used before the AWS migration. The server's back at 100% now, with no inbound or outbound networking issues, and I have a locked-in rate of $7.70/month for everything, including daily backups. Still cheaper than AWS, even if I had put in the effort to iron out all of the IPv6 issues.

I'm still tunneling traffic, just because I like using Cloudflare's ZTNA tool to add 2FA to this site's admin panel.

Sigh. It hurts knowing that I would have been better off had I literally done nothing instead of trying to penny pinch when my student credit expired, but whatever. The stress hairs have already fallen off my head, and wallowing won't get them back.

At this point, I can no longer recommend AWS EC2 for cheap hobbyist projects that need a VM, beyond the free tier. If someone else is footing the bill, or you have to use AWS for compliance reasons, sure, go for it. But for everyone else, I heavily recommend DigitalOcean. The user interface is a lot more intuitive, it's been hassle-free for me, and it's cheaper at the end of the day.

I'm tired. I guess the plan is now to monitor this instance for any issues, but considering it's the exact same setup as the instance that has been powering this server for over a year, it'll probably be fine.

Have a good one kids, I'll see you all in the next post.