I like to think I know exactly what I’m doing when it comes to taking things from idea to code and then out onto the internet. At this point I even have to admit to myself, despite a heavy serving of self-doubt, that I can actually produce pretty good stuff that ends up on the internet. Working professionally for 13+ years should result in that, if you try at it for that long, I would hope.
Nonetheless, when it comes to “hack it together and put it online” vs. “doing it The Right Way” I start to see my own go-to shortcuts become glaringly obvious. It’s rare, for me, to dive head-first into a personal project then step back and say “hold on, did I design this right?” But with this new project (I’m still working on a cool name), as I’ve moved from “local dev server” to “can this actually run out there on the internet?” I’ve hit a few bumps.
This new project is a fun little MERN stack, which means it’s snappy and it’s TypeScript to the maximum, which is, apparently, what I really like1. So locally, building this bad boy is a breeze. You got your client here and your server there, and two commands later you’re running them side by side, sharing types, talking to DB layers with a few simple models and methods. We’re cooking with gas2.
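That shared-types bit is a big part of why the monorepo feels so nice. As a sketch (the names here are hypothetical, not my actual code), one interface defined in a shared package gets type-checked on both the client and the server:

```typescript
// shared/types.ts — a hypothetical type both sides import
export interface Task {
  id: string;
  title: string;
  done: boolean;
}

// A helper that could live anywhere in the monorepo; both the
// Express handlers and the React components see the same shape.
export function summarize(tasks: Task[]): string {
  const done = tasks.filter((t) => t.done).length;
  return `${done}/${tasks.length} done`;
}
```

Change the interface once and the compiler yells at you in both apps, which is exactly what you want.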
But then comes the fun part: How do you get this out to the internet?
Plans? Oh, I got plans alright.
My history with building web apps from scratch has always been in PHP. You futz around with a LAMP (or WAMP or MAMP) server, drop some PHP files into a folder and load in some JavaScript and CSS — heck, maybe you compile the JS and CSS! — then write until it’s done. Copy/Paste your files out to a server running Apache and PHP with a MySQL database instance and it all kind of works3.
Diving into MERN, I was a little unsure of how to do things, but I figured… it can’t be that much more complicated, right?
The short answer is: It really isn’t that much more complicated, but I wanted to do things The Right Way. I wanted to use AWS. I wanted to use a system that I kind of know and that is relied on by numerous companies and organizations across the globe. Plus, it’s theoretically pretty simple to set up. So why not?
Here’s what I had in mind:
Use MongoDB Atlas to host a MongoDB database for me
Set up a CodePipeline to read my monorepo for new commits on GitHub and deploy them
Register a domain and set it up in Route53
Build the assets for the front end that point to my new domain for the server and deploy to S3
Point my domain at the S3 bucket to kick off the frontend
Build the server assets that point to my MongoDB and deploy to ???
Point a subdomain at my server using ??? for the API
Easy enough. Right?
Sort of.
Running things The Right Way
According to many articles I read but have since lost in 3+ days of trying to get things deployed to some sort of internet-facing space, monorepos aren’t necessarily the right way to do this. I kind of knew this, and yet I persisted. I figured: This is scrappy, let’s just do it.
But doing things The Right Way would have meant I had two separate repositories — one for my client, one for my server. That way I could deploy them completely separately from one another, without a change in one repo affecting the other4.
For The Right Way to work, the recommended flow is:
Deploy my client repo for every change via Amplify
Deploy my server to either
Elastic Beanstalk via CodeDeploy
or build my server as a series of AWS Lambdas rather than a NodeJS server — this would remove the need for a living server and may decrease my overall costs in AWS
Connecting to Mongo Atlas wouldn’t change
Then I could point my domain to the S3 bucket as I expected and use CloudFront as an edge cache to eliminate any latency when downloading client assets to your browser.
Plus, this is the only way to add HTTPS to an S3 hosted site
So I wasn’t far off. But my setup didn’t exactly meet this flow.
Running things My Way
What’s nice about being not too far off and being a capable person who can dig through StackOverflow posts, AWS documentation, Medium blog entries, and a myriad of other online resources is that I found a solution that was a nice middle ground but also kicked my brain into the gutter by the end5.
Still using the monorepo and a handful of clever workarounds, my setup has come down to this (for now):
CodePipeline For Deploys
Both the client and the server are built on every push to my main branch. It’s not ideal if a small change needs to go out, but I’m fine with it.
The pipeline is a simple 3 step process:
Read the repo for new changes pushed to the main branch
Build the assets and push the compiled client to S3
Then zip the compiled NodeJS server and push it to Elastic Beanstalk, where the NodeJS-friendly environment will automatically start the server when it receives the payload (some info on this later).
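The three steps above all live in a buildspec. As a rough sketch (the paths, bucket name, and package layout here are hypothetical, not my actual config), a buildspec.yml for that flow might look something like:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
    commands:
      - npm ci
  build:
    commands:
      # Build both packages from the monorepo
      - npm run build
      # Push the compiled client straight to the S3 bucket
      - aws s3 sync packages/client/build s3://www.mywebsite.com --delete

artifacts:
  # The compiled server becomes the artifact CodePipeline
  # zips up and hands to Elastic Beanstalk
  base-directory: packages/server
  files:
    - '**/*'
```

The key trick is that the client is shipped as a side effect of the build phase, while only the server makes it into the final artifact.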
Domains and HTTPS
Once the code is in place, the domain I’ve registered is set up through Route53 with two A Records pointing at two CloudFront instances that front two S3 buckets (more on this later).
The domain routes to CloudFront and serves the client from S3 and it’s nice and quick. Just as I hoped.
Again, sort of.
Hiccups along the way
The big fucking challenge I faced when getting this all working end to end was two-pronged:
Elastic Beanstalk is quite complicated if you don’t know how it’s built to run things, and compiling the server assets correctly is annoying
Setting up your S3 buckets and CloudFront instances is weirdly straightforward yet somehow very chaotic
Let me break it down.
Server assets for Elastic Beanstalk
Elastic Beanstalk, when set up to run NodeJS and nginx, expects your assets to be structured in a certain way:
Either you provide a zip file that has your assets ready to be executed, with node_modules in the root folder, so that node [server|app].js can be run after unzipping
Or you provide a zip with a package.json that has a script attribute for npm start to be run. I think this will run npm install too?
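For that second option, the package.json only needs a handful of fields. A minimal sketch (not my exact file, and the dependency is just an example):

```json
{
  "name": "my-server",
  "version": "1.0.0",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.18.0"
  }
}
```

With that in the zip root, Beanstalk can figure out how to boot the app on its own.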
Regardless, my setup via the Deploy step in my CodePipeline took several iterations to write a proper buildspec.yml that would:
Compile client and server assets for production and push to S36
Zip the server assets as the final artifacts for deployment to my Elastic Beanstalk instance
Why? Well, I had created a small hiccup in my setup as a monorepo by using a seriously handy tool called Lerna that would compile and build/run my server and client all with one command. This is unbelievably nice when you’re building things locally, but because my pipeline for deploying assets for my client and server wasn’t exactly The Right Way, this meant I needed to iterate a whole lot.
On top of this, I had to figure out how to get environment variables onto my Elastic Beanstalk instance for the server to read from… but luckily, I found a handy StackOverflow post that solved my problems real fast.
In the end, using a combination of .ebextensions and buildspec magic, things were set to deploy. And it worked7!
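For the curious, the .ebextensions side of that boils down to dropping a config file in the zip. A sketch (the variable names and values here are hypothetical placeholders, and anything secret is better set through the Beanstalk console):

```yaml
# .ebextensions/options.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    NODE_ENV: production
    MONGO_URI: placeholder-overridden-in-console
```

Beanstalk reads any .config file under .ebextensions at deploy time and exposes these as environment variables to the Node process.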
S3, CloudFront, and getting everything Just Right
In hindsight, it’s kind of cute that I thought simply deploying my client to an S3 bucket that was set up to run as a static website would be enough to just work. Call it optimism, maybe.
But, no, if you want an HTTPS-fronted S3 bucket, there are a few steps you need to follow, and I mostly resisted following a proper guide because I was confident I could get it working.
It turns out, you need two S3 buckets, one for mywebsite.com and one for www.mywebsite.com - each bucket named to match the domain. Following the guide, the mywebsite.com bucket should redirect all requests to the www.mywebsite.com bucket. Then you need to create two CloudFront instances that map to each of the buckets.
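In CLI terms, the bucket half of that setup looks roughly like this (a sketch with a placeholder domain — the CloudFront distributions and the ACM certificate for HTTPS still have to be wired up separately):

```shell
# Bucket names must match the domains exactly
aws s3 mb s3://www.mywebsite.com
aws s3 mb s3://mywebsite.com

# The www bucket actually serves the site
aws s3 website s3://www.mywebsite.com --index-document index.html

# The bare-domain bucket does nothing but redirect to www
aws s3api put-bucket-website --bucket mywebsite.com \
  --website-configuration \
  '{"RedirectAllRequestsTo":{"HostName":"www.mywebsite.com","Protocol":"https"}}'
```

The naming requirement is the part that bit me: S3 website hosting only lines up with a custom domain when the bucket name is the domain.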
Honestly, just go read the guide if you’re curious. Needless to say, I spent most of a Sunday night trying to step around these rules with my S3 Buckets and CloudFront instances but ultimately settled on starting the guide from scratch and following it to the letter.
Guess what? Now everything works.
Now it’s live, right?
Oh buddy. I wish. I mean, it is out there on the internet, but it’s not ready. I had to write some quick code to basically lock down the site lest someone find it and throw a zillion errors into my logs, because MongoDB isn’t entirely working. But we’re getting there.
More to come once I get my logging setup correctly and my connection to MongoDB actually working. For now, we’ve got an end-to-end solution and I’m pretty pleased by that.
I’ve got a few questions I still need to consider (namely: how can I integrate Lambdas for offloading some scheduled calls and other infrequent calls that could be time-consuming / stressful to my server), but I’ll get to those in another post.
For now, I’ll leave you with this: I’m still feeling pretty invigorated by this project - whatever cool name I may call it. Not only is it a thing I want to use and have out in the world, but it’s also a good exercise in figuring out where my skills are rusty. It’s amazing how much there is to still learn (or relearn) out there even after working consistently for over a decade in the field of web development.
I like it.
Next time
A brief overview of what the hell we’re actually building here.
Don’t talk to me in 2015 when I thought NodeJS as a server stack was a joke
Well, now it’s ONE command but I’ll talk about that later.
I’m simplifying things, I know, but it feels that easy in my experience… or maybe that’s because I’ve done so much work with it. YOU TELL ME
Regardless of this being a “best practice,” this isn’t really how I’ve operated since I started building this project. Many client changes usually also include some kind of server change, and vice versa.
Seriously. I slept like a rock after all of this, and I have to think it was because I was straining my mind to wrap it around a dozen or more concepts at once with the hope that things would just work.
I did have to sort out some business with permissions, but you get a few VERY OBVIOUS errors if you haven’t set up the right roles in AWS and permissions on the S3 bucket to allow for objects to be copied to the bucket
Except for MongoDB which is another post entirely. I’ll write that up once I get it figured out 😭