I can probably count on both hands how many times in my life I have stayed in a hotel; I can count on a single hand how many times I have stayed in one alone. It just wasn’t something my family did growing up - we certainly went on holiday, just not in hotels. So as I checked out of the Holiday Inn Express my work had booked for me at 3am, the night manager hit me with the comment “do you ever sleep?”. I have been working unsociable hours since I joined the workforce, and prior to that I won’t pretend that the time of day had much of an impact on my sleep habits whatsoever. With friends, I say “time is aetherial” when prompted about this - with this receptionist I just chuckled. I dread to think of the havoc working nights has wreaked on my immune system, but it’s the hand I am forced to play at this time.
I am sure you are thinking “Ed, what does this have to do with devops?”, and yes, I will admit this seems like an unrelated topic. What you need to understand, however, is that hotel rooms are incredibly boring, and there is only so much Ready or Not I can play on my Steam Deck before I need some other form of mental stimulation. Yesterday, that mental stimulation took the form of me finally fixing the CICD pipeline that builds this blog.
I imagine some of you will be familiar with the previous post I made on this subject in May, entitled “So… I Guess I’m Learning DevOps?”. Some of you may even have followed the resulting build logs of me trying to get the CICD pipeline running. If you did read those, I can only apologise - sometimes it takes stepping back for eight months to understand that the methods you were employing to fix a particular problem were ill-conceived at best, and idiotic at worst.
The context
First, some context - I had barely dipped my toe into devops prior to this point. It was first mentioned to me by a friend called Squirrel in 2020, who said “you would probably love this”. I next paid attention a few years later when doing some things with my GitHub profile readme, setting up a runner which automatically updated it to show my recent blog posts. This led to my first attempt at writing a CICD pipeline to build my Hugo blog and push the result to the Pages submodule whenever I committed a new post to the blog-posts submodule. It did not work, mainly because I was still incredibly novice at even some of the basic fundamentals.
I then at some point moved out of my parents’ house and got my first server, and the second attempt was spawned. This time I was going to do it using Jenkins, and I actually got the first version of the pipeline written. It didn’t work, because the Jenkins container I had running did not have Python or Hugo installed. At this point my Docker knowledge was still at that spot on the Dunning-Kruger curve where you think “I understand this”, right before you fall into a pit of imposter syndrome. For reasons I cannot fathom now, I did not want to use a Dockerfile to fix the issue; I instead wanted to host my own local Docker registry and do it that way (it was also lost on me at the time that Gitea had its own Docker registry). Looking back, this reasoning was not totally unfounded - you see, I was using Portainer to manage my Docker deployments, and the Community Edition of Portainer did not play nicely with Dockerfiles (or it did and I was just too much of a noob to figure it out). This excuse doesn’t totally vindicate me; I was definitely overcomplicating it. In any case, the project went on the back burner.
Fast forward eight months, and I have matured slightly with my homelab. Portainer became Komodo, Gitea and Jenkins became GitLab, and I slowly became competent enough in Docker to no longer have to ask ChatGPT every time I needed something fixed - I just did it. By this point I had also decided that I wanted to move towards a GitOps deployment model, living by the ethos “if it is not version controlled, then it is not important”. I began to regard git with reverence, and deployed tools like Mend Renovate to tighten my control over what versions are deployed, Infisical to handle secrets, and Traefik to handle automated Layer 7 routing based on Docker labels. Git became to me what the machine spirit is to the Adeptus Mechanicus: a source of truth, hidden in the background, powering the tech I depend on every day.
Of course, there were problems. Two problems specifically.
- I could not deploy GitLab from within itself, and thus the deployment was effectively decoupled from the repo
- I could not use Komodo to deploy Komodo, causing a similar situation of state drift
I have not yet figured out a solution to the first one, but I know fixing the second will involve a CICD pipeline. Thus, for the fourth time in my life, I dusted off the spellbook of this arcane knowledge and vowed I would fix my blog’s pipeline as a means to learn it.
The Pipeline
In many ways, building my blog is one of the simplest things in the world. If I were to do it by hand, I would have to do the following:
cd hugo-site
./build-site.sh
cd public
git add .
git commit -m "feat: publishing x post"
git push
Much of this simplicity comes from that shell script, which wraps the complexity I introduced when I decided, at one point, that every post needed to be GPG signed. The script itself is only a few lines long.
#!/usr/bin/env sh
set -eu
python3 scripts/generate_from_source.py --clean
hugo -t risotto
The last two lines are the most pertinent.
Running python3 scripts/generate_from_source.py --clean regenerates my Hugo content plus the detached “signature view” files from sources/sigs, which is the path of my blog-posts submodule. The --clean flag ensures it deletes the old generated top-level content directories and static/sigs/ first, to avoid stale output. My ego is telling me to post this script, but I am opting not to this time, for the simple fact that it is 300 lines long and I do not want to attack you with a wall of Python.
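To give you a flavour without the wall, here is a heavily simplified toy sketch of roughly what it does. This is not the real script: it assumes posts live as clearsigned .txt files under sources/sigs, and it skips all the metadata parsing and front matter generation the real one has to do.

#!/usr/bin/env python3
"""Toy sketch of generate_from_source.py - illustrative only."""
import argparse
import shutil
from pathlib import Path

SOURCES = Path("sources/sigs")   # the blog-posts submodule
CONTENT = Path("content/posts")  # generated Hugo content (assumed layout)
SIG_VIEW = Path("static/sigs")   # published "signature view" copies

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--clean", action="store_true",
                        help="delete previously generated output first")
    args = parser.parse_args()

    if args.clean:
        # Wipe stale output so deleted or renamed posts disappear too
        shutil.rmtree(CONTENT, ignore_errors=True)
        shutil.rmtree(SIG_VIEW, ignore_errors=True)

    for src in SOURCES.rglob("*.txt"):
        rel = src.relative_to(SOURCES)
        # Publish the raw clearsigned file under static/sigs/ for verification
        sig_out = SIG_VIEW / rel
        sig_out.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, sig_out)
        # Emit a markdown page for Hugo (the real script parses metadata,
        # strips the PGP armour, builds front matter, and so on)
        md_out = (CONTENT / rel).with_suffix(".md")
        md_out.parent.mkdir(parents=True, exist_ok=True)
        md_out.write_text(src.read_text(encoding="utf-8"), encoding="utf-8")

if __name__ == "__main__":
    main()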
Running hugo -t risotto will take the files generated by the previous command, and generate the actual files used to run eddiequinn.xyz.
Looking at this, one would assume that writing a CICD pipeline for it would be simple, no? Well… not entirely. There are some caveats that make it a tad complicated for a novice.
- I have a strange habit of only using SSH with git, and only using HTTPS if I absolutely have to. This means using deploy keys (more on those just after this list)
- I wanted it to be the case that when I push code to a submodule, it causes the root repo to rebuild the site.
- The site is currently being hosted on GitHub Pages, so that meant two different Git hosts whose credentials had to be factored in.
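On the first of those: a deploy key is nothing exotic, it is just an SSH keypair whose public half is registered against a single repo. Something like the following (illustrative - the filename and comment are arbitrary):

ssh-keygen -t ed25519 -f id_ed25519_gitlab -N "" -C "blog-ci-deploy-key"

The private half then goes into a CI/CD variable (GITLAB_SSH_KEY and GITHUB_SSH_KEY in the pipelines below), and the public half gets added as a deploy key with write access on the repo it needs to push to.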
What I ended up with was two .gitlab-ci.yml files. I will show you the one in the blog-posts repo first, as it is the smaller of the two.
stages: [trigger]

trigger_site_build:
  stage: trigger
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  trigger:
    project: my-blog/hugo-site
    branch: master
    strategy: depend
  variables:
    SIGS_REF: "$CI_COMMIT_SHA"
This pipeline performs a single job when a commit is pushed to the default branch (master, in my case). Its sole purpose is to trigger a pipeline in the root repo on the master branch. It passes the current commit’s SHA as a variable (SIGS_REF) so the downstream project can build against the exact version that changed, and because it uses strategy: depend, it waits for that downstream pipeline to finish and mirrors its success or failure. In short, it acts as a controlled, branch-gated trigger that rebuilds my Hugo site whenever the source repo updates, while preserving reproducibility and proper failure propagation.
Now for the big boy
stages: [build]

variables:
  GIT_SUBMODULE_STRATEGY: none

build_and_publish:
  stage: build
  image: alpine:3.20
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
    - if: '$CI_PIPELINE_SOURCE == "pipeline"' # allows downstream trigger from sigs
  before_script:
    - apk add --no-cache git hugo python3 py3-yaml openssh-client ca-certificates
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -p 2424 gitlab-ssh.eddiequinn.casa >> ~/.ssh/known_hosts
    - ssh-keyscan github.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - echo "$GITLAB_SSH_KEY" > ~/.ssh/id_ed25519_gitlab
    - echo "$GITHUB_SSH_KEY" > ~/.ssh/id_ed25519_github
    - chmod 600 ~/.ssh/id_ed25519_gitlab ~/.ssh/id_ed25519_github
    - |
      cat > ~/.ssh/config <<'EOF'
      Host gitlab-ssh.eddiequinn.casa
        HostName gitlab-ssh.eddiequinn.casa
        Port 2424
        User git
        IdentityFile ~/.ssh/id_ed25519_gitlab
        IdentitiesOnly yes

      Host github.com
        HostName github.com
        User git
        IdentityFile ~/.ssh/id_ed25519_github
        IdentitiesOnly yes
      EOF
    - chmod 600 ~/.ssh/config
    - git config --global user.name "${GITHUB_USER}"
    - git config --global user.email "${GITHUB_EMAIL}"
  script:
    - git submodule sync --recursive
    - git submodule update --init --recursive
    - |
      if [ -n "${SIGS_REF}" ]; then
        echo "Using sigs commit ${SIGS_REF}"
        git -C sources/sigs fetch --all
        git -C sources/sigs checkout "${SIGS_REF}"
      fi
    # Update public/ first (before generating files into it)
    - |
      cd public
      git fetch origin master
      git checkout -B master origin/master
      git reset --hard origin/master
      git clean -fdx
      cd ..
    # Build writes into ./public
    - sh build-site.sh
    # Commit + push the new build
    - |
      cd public
      if [ -z "$(git status --porcelain)" ]; then
        echo "No changes to publish."
        exit 0
      fi
      git add -A
      git commit -m "ci: autobuild site"
      git push origin HEAD:master
This pipeline defines a build stage that runs in either of the following scenarios:
- You push to the master branch directly
- It’s triggered by another pipeline, for example the one from the previous runner
The line GIT_SUBMODULE_STRATEGY: none disables GitLab’s automatic submodule handling, because I am managing submodules manually inside the job; this is mainly down to the SSH constraints I mentioned earlier.
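To make that concrete, the .gitmodules at the root looks something like this (repo paths swapped for placeholders):

[submodule "sources/sigs"]
  path = sources/sigs
  url = ssh://git@gitlab-ssh.eddiequinn.casa:2424/<group>/<blog-posts-repo>.git
[submodule "public"]
  path = public
  url = git@github.com:<user>/<pages-repo>.git

Both URLs are SSH, and GitLab’s built-in submodule handling authenticates with the CI job token over HTTPS, so it cannot clone these - hence doing it by hand with the deploy keys described earlier.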
The job runs inside an alpine:3.20 container and installs everything it needs on the fly: git, hugo, python3, py3-yaml, and the SSH tooling. This fixes the issue I was having with Jenkins before. The before_script is entirely about secure Git access. It sets up SSH keys for both my self-hosted GitLab and GitHub, populates known_hosts, configures per-host identities, and sets the Git author info from CI variables. In other words, the runner prepares itself to pull submodules from private repos and push the built site out securely.
In the main script section, it first syncs and initialises submodules manually. Then, if the pipeline was triggered downstream and passed a SIGS_REF, it checks out that exact commit inside sources/sigs. This ensures the Hugo build uses the precise commit that triggered the upstream pipeline, rather than whatever happens to be current - a far more predictable build process, and predictability is king with automation.
Before building, it resets the public/ directory to exactly match origin/master, wiping any local drift; this is the submodule linked to the GitHub Pages repo. Then build-site.sh runs the Python script and the Hugo build command mentioned earlier. Following this, it checks whether anything actually changed; if not, it exits cleanly. If there are changes, it commits them with ci: autobuild site and pushes to master.
It is clean, it is predictable, and it does the job I need it to do without a scruple of diffidence. I can assure you that the elation which swept over my body when I saw it run for the first time was tantamount to the kind of dopamine big pharma wishes it could synthesise.
There are improvements I wish to make. Recently I rotated my PGP key, and with that the majority of the signed posts are now signed by a defunct key. I want to fix this, and to edit the CI runner in the blog-posts repo so it checks that all the posts have a valid signature before triggering the build process in the root repo. I have a plan to do this soonish, but I am going to wait until I am back home to kick it off - working from this laptop is beginning to annoy me, and I miss my main PC.
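As a rough sketch of where my head is at (none of this exists yet - the PGP_PUBLIC_KEY variable name and the assumption that every post is a clearsigned .txt file are mine), the verification job might look something like this:

verify_signatures:
  stage: verify
  image: alpine:3.20
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
  script:
    - apk add --no-cache gnupg
    # Import the current public key from a CI variable (hypothetical name)
    - echo "$PGP_PUBLIC_KEY" | gpg --import
    # Verify every clearsigned post; fail the job if any signature is bad
    - |
      status=0
      for f in $(find . -name '*.txt' ! -path './.git/*'); do
        if gpg --verify "$f" >/dev/null 2>&1; then
          echo "OK      $f"
        else
          echo "INVALID $f"
          status=1
        fi
      done
      exit $status

With verify slotted in ahead of trigger in the stages list, the trigger job would only ever fire once every signature checks out.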
Technical and Mental Maturity
I don’t know if you can tell, but this post is not about Docker, my homelab, devops, or even my blog. It is about growing into technical competence, and understanding that not knowing things at the point you are at is neither a good thing nor a bad thing - it just is.
The Mandelbrot set that is “technology” is just that: a fractal. You cannot know everything two weeks, two years, or two decades into your journey. You will make and break things countless times, and that again is not good or bad - it just is. When I got my first tech job, I wanted to learn every programming language and get every certification. Now, at the age of 28, as I sit in this hotel sipping possibly the worst coffee I have ever had, I know that chasing these tendrils is a farce. Do what you find interesting, and learn what makes sense there and then. If an idea was meant to break through the barrier from aetherial to tangible, then it will.
With moronic laws like “chat control” and the “online safety act” rearing their ugly heads, the ability to take control of your own infrastructure is the difference between walking the road to serfdom and being in control of the data you produce. I had wanted to do a post on these acts when they were happening, but chose not to - I instead chose to act by severing my relationship with platforms that were bending the knee. My friends moved from Discord to Matrix within 48 hours. If there comes a point where this Britcard spyware app is put onto my phone, I will make the move to GrapheneOS as I have been meaning to for several years. If the UK government follows through on its unfounded threat to ban commercial VPN usage, I will host my own VPN server. All of these serve to illustrate the point that projects, and their reasons for being, spin up and spin down based on your environment and needs. You don’t need to implement an idea as soon as you are free to do it; you can shelve it for a better time.
I am under no illusion that at the age of 28 my understanding of the world is likely still laughably narrow. I will not pretend that sitting here discussing maturity as if I were an authority on it, whilst peppering in references to the mystic and the nerdy, doesn’t feel like a form of comedy. Ten years from now I may say that the outlook in this post is immature, just as in this post I have been critical of decisions I made in the past. People change with their experiences, and their outlooks change with them: this is not a good thing or a bad thing, it just is.
Verify this post
This page is published as a PGP clearsigned document. You can verify it like this:
gpg --keyserver hkps://keys.openpgp.org --recv-keys CA98D5946FA3A374BA7E2D8FB254FBF3F060B796
curl -fsSL 'https://eddiequinn.xyz/sigs/posts/2026/feb/devops-and-fractal-competence.txt' | gpg --verify