
Devlog 1

Setting up the Website


This devlog is just going to be about how I've set up this website. I've predominantly been a web developer for the past decade, doing a lot of work on realtime web apps with React, Node and the like, so I could really have gone overboard on this. However, I just wanted something where the friction between having an idea and publishing it was as low as possible, so I settled on a static site generator (Jekyll, to be precise) along with a GoDaddy domain name and their most basic hosting package. This turned out to be really quite straightforward, especially since Gryph sent me over an initial design and assets, so all I really had to do was sort out the CSS.

Then I'd just build the site using Docker and upload the files via FTP to the default web root on the server.

How boring…

So instead I whipped up a quick GitHub Action to do that for me automatically whenever I pushed to the main branch of the website repo. That looked like this:

name: 🚀 Deploy Site

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    - name: 🔨 Build the site in the bretfisher/jekyll container
      run: |
        docker run -v ${{ github.workspace }}:/site --entrypoint bash bretfisher/jekyll -c "bundle install && bundle exec jekyll build"
    - name: 📂 Sync files with starcove.studio
      uses: SamKirkland/FTP-Deploy-Action@v4.3.4
      with:
        protocol: ftps
        server: ${{ secrets.FTP_HOST }}
        username: ${{ secrets.FTP_USER }}
        password: ${{ secrets.FTP_PASSWORD }}
        local-dir: ${{ github.workspace }}/_site/
        server-dir: public_html/

I hadn't worked with GitHub Actions before, but it turned out to be really easy, and this was all set up and working very quickly. The general gist is: a Docker container builds the Jekyll site, and then the built folder gets uploaded over FTP into public_html on the server.

Now, this was back in February, so this landing page has been sitting around for a while without any posts at all (hopefully that changes now!). When I recently started getting ready to do some writing for it, I realised it was going to be a little difficult to show Gryph what I was working on without it going "live" for everyone on the website. I couldn't just build the site and send over the files, because you need a webserver for the CSS to actually load properly. That would have meant a load of faffing around: showing Gryph how to set one up, port-forwarding a local server, installing Docker on his machine, or any of the infinite other possibilities. But I wasn't going to settle for that, I wanted to tinker some more! (I believe this is called procrastination.)

My aim was to have dev.starcove.studio as a sort of staging environment, if you will. Normally this is a pretty easy task, just:

  1. Set up a DNS A record to point the subdomain to the same host
  2. Sort out some routing on the server to handle requests for that subdomain
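Step one amounts to a single record in the DNS zone; something like this (the IP and TTL here are illustrative, not my real values):

```
dev    600    IN    A    203.0.113.10
```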

Step one was easy with GoDaddy; it's step two that got me, thanks to the "simple" approach I'd decided on before: the basic hosting package.

What you get with that hosting package is essentially a server with Apache installed and cPanel configured so you can modify files on it from the web. So I tried enabling SSH and seeing what was going on on the actual box. Turns out all the Apache config is locked away and you get access to almost nothing: no ping, no curl, and heaven forbid they give you sudo access to actually install anything.

One thing that was pretty cool with GoDaddy was their "Install Application" feature, the Installatron; I used it to install Matomo in one click!

So, short of buying another hosting package from GoDaddy and pointing the subdomain at that, there was no way I could think of to make the subdomain work. (I could have just uploaded the dev build into a subfolder on the server and called it a day with something like starcove.studio/dev, but by this point I was getting annoyed: this should have been easy, and it felt like it was made difficult on purpose to get people to spend more money.)

It was at this point that I decided to give up on the GoDaddy hosting, move to a VPS with OVH, and do it all myself. At the time of writing they're running a deal where you can get their most basic package (exactly the same specs as the GoDaddy hosting, but with 4x the RAM: 2GB as opposed to 512MB) for about £0.80 a month for the first year! That beats the ~£60 I spent on the GoDaddy host for the year by quite a sizeable margin!

Setting up the VPS

Even though I've set up countless Linux boxes over the years, I still always have to jog my memory on some of the intricacies, like disabling password login over SSH. Once that was all set up and ready to go, it was time to start thinking about how to actually serve the files.
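For future reference as much as anything, the bit I always forget is just a couple of lines in sshd_config (a sketch for a typical Debian/Ubuntu box; check your distro's defaults, and make sure key login actually works before restarting the daemon):

```
# /etc/ssh/sshd_config, after copying a public key over with ssh-copy-id
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin prohibit-password

# then apply it with: sudo systemctl restart ssh
```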

There are pretty much infinite ways to do this. In the distant past I might have set up Apache with some convoluted config pointing at different directories, or nginx, or something similar; more recently I'd have used HAProxy to route the subdomain traffic to different internal ports. These days Docker is my absolute go-to, so with all this in mind I searched for a lightweight nginx container to just serve the static files, and what I found really surprised me.

nginx-static is the image I found, and its description specifically mentions that it can't serve files over HTTPS on its own; it recommends Traefik as a "lightweight" reverse proxy with Docker integration. That got me reading, and I really don't know how I'd never come across it before: it's quite literally the best reverse proxy I've ever had the pleasure of using. I would normally have gone with HAProxy, either copying the config from another project or struggling through the documentation, but this made it so easy. As long as your containers are on the same Docker network, it sorts out all the networking and routing to them automatically: you just set rules in the compose file for how each container should be reached, and it does everything. It also does far more than what I'm using it for, so I'll definitely be suggesting it at work and reaching for it in future projects that need this sort of thing.

And here's the kicker: it can also automatically generate TLS certificates via Let's Encrypt, and renews them automatically too! How GoDaddy gets away with charging £20 per certificate is beyond me when this sort of thing is available.

This also let me set up Matomo again very easily: I just put the Matomo Docker image in a compose file, connected it to the Traefik network, and it was done (almost as easy as GoDaddy's one-click installation).

So yeah, basically just two of those nginx containers pointing at different directories on the VPS, serving the built static files through the Traefik reverse proxy to starcove.studio and dev.starcove.studio.
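For illustration, the whole setup can be sketched in one compose file like this. This is an approximation rather than my exact config: the image tags, email, and host paths are placeholders, and I'm assuming the flashspys/nginx-static image, which serves whatever you mount at /static.

```yaml
version: "3"

services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.le.acme.email=you@example.com"
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.le.acme.tlschallenge=true"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Traefik watches the Docker socket to discover containers and their labels
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./letsencrypt:/letsencrypt"

  site:
    image: flashspys/nginx-static
    volumes:
      - "/srv/www/live:/static"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.site.rule=Host(`starcove.studio`)"
      - "traefik.http.routers.site.entrypoints=websecure"
      - "traefik.http.routers.site.tls.certresolver=le"

  site-dev:
    image: flashspys/nginx-static
    volumes:
      - "/srv/www/dev:/static"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.site-dev.rule=Host(`dev.starcove.studio`)"
      - "traefik.http.routers.site-dev.entrypoints=websecure"
      - "traefik.http.routers.site-dev.tls.certresolver=le"
```

The magic is all in the labels: Traefik watches the Docker socket, sees the Host() rules on each container, and wires up the routing and certificates on its own.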

GitHub Actions Updates

The final part of this was getting the GitHub Actions working again, as I wanted two things: to push the built files to two different places depending on the branch, and to move away from FTP.

The move away from FTP was simply because I wanted to remove any sort of password authentication with the server completely, in favour of key-based authentication. So I found an SCP GitHub Action (SCP Deploy Action) and implemented it. Simple.

The publish step looks like this now, with that SCP change implemented:

- name: 📂 Publish
  uses: nogsantos/scp-deploy@v1.3.0
  with:
    host: ${{ secrets.SSH_HOST }}
    user: ${{ secrets.SSH_USER }}
    key: ${{ secrets.SSH_KEY }}
    src: ${{ github.workspace }}/_site/*
    remote: ${{ secrets.SSH_DIR_LIVE }}

As I've said, I'm not too familiar with GitHub Actions, so the simplest route I could find to deploy to different locations based on the branch was to make two workflow files and modify the on: push branches, like so:

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

Changing main to develop in the develop branch's workflow, and giving each one a different remote location.
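So the develop workflow file ends up identical apart from the branch names and the target directory; something like this (SSH_DIR_DEV is just an illustrative secret name for the staging directory, not necessarily what I called mine):

```yaml
# deploy-dev.yml: same as the main workflow, except for the branches...
on:
  push:
    branches: [ "develop" ]
  pull_request:
    branches: [ "develop" ]

# ...build step unchanged...

    - name: 📂 Publish
      uses: nogsantos/scp-deploy@v1.3.0
      with:
        host: ${{ secrets.SSH_HOST }}
        user: ${{ secrets.SSH_USER }}
        key: ${{ secrets.SSH_KEY }}
        src: ${{ github.workspace }}/_site/*
        remote: ${{ secrets.SSH_DIR_DEV }} # separate secret pointing at the dev directory
```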

This all works absolutely great, and I don't think I need to tinker with it any more; the whole workflow is very simple and requires almost no input from me. Even though I spent a lot more time on this than I would have liked, I learned something new and honed my existing skills, so it wasn't a complete waste of time!

The procrastination from actual game dev stops now!

(until the next side quest comes along)