Hello, and welcome to the SpecTrust engineering blog!
This introductory post serves both as a “Hello, World” to make sure we have some content up before we start posting in earnest, and as an opportunity to describe how this blog is run.
## About SpecTrust Engineering
SpecTrust’s mission is to unify the fight against cybercrime. We believe that engineers’ time is better spent driving value in their business’s domain than constantly playing catch-up with fraudsters, so we provide a no-code platform that gives anyone deep insight into the legitimate and illegitimate traffic patterns on their site, along with powerful workflows to tag, reroute, deny, or otherwise quash the illegitimate ones.
We’ve got a strong team of engineers working on the Rust platform services that drive data collection and enable fraud mitigation, the Node/React/TypeScript hub services that aggregate that data and provide a unified user experience, and the Terraform-driven infrastructure that ties it all together. We’re a distributed team, with engineers in Austin, Michigan, Oklahoma City, Phoenix, and San Jose.
## About this Blog
We agree with Dan Luu that reducing friction is key to creating a good engineering blog[1]. With that in mind, we’ve tried to design a system that is as easy as possible for engineers while still providing checkpoints for approvals as we find we need them.
### Tech Stack
The blog is a static site generated by Hugo, a single-binary Go program. While we allow engineers at SpecTrust to set up their environments however they want, we always provide the necessary configs for an isolated, nix-driven local development environment, along with a Makefile that provides semantically meaningful entrypoints. So, in this case, we install Hugo with a simple `make setup`, which uses our nix configs to ensure that every engineer has the exact same version of it and of any other dependencies.
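For a sense of what those entrypoints look like, here is a minimal sketch of such a Makefile. This is an illustration, not our literal file: the `serve` target and the exact recipes are assumptions.

```makefile
# Hypothetical sketch of the project Makefile; only `setup` and `build`
# are mentioned in this post, and the recipes here are assumed.

.PHONY: setup build serve

# Install nix-direnv into the user profile so the .envrc below can use it.
setup:
	nix-env -f '<nixpkgs>' -iA nix-direnv

# Generate the static site into ./public/ (run inside nix-shell,
# where hugo is available).
build:
	hugo --minify

# Serve the site locally, including draft posts.
serve:
	hugo server -D
```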
From there, it’s pretty much straight down the fairway. We create a new post with `hugo new posts/<title>.md`, write some content in whatever our favorite editor happens to be, commit it, and open a merge request in GitLab. The GitLab CI process handles building the site and publishing it via GitLab Pages.
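Concretely, the whole authoring loop looks something like this (the post name, branch name, and commit message are placeholders):

```shell
# Create a new post from the archetype (it starts as a draft).
hugo new posts/my-first-post.md

# Preview locally, drafts included, at http://localhost:1313/.
hugo server -D

# Commit on a branch and push; then open a merge request in GitLab.
git checkout -b post/my-first-post
git add content/posts/my-first-post.md
git commit -m "Add my first post"
git push -u origin post/my-first-post
```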
### Nix
I’m sure we’ll have more posts on nix, but here’s a brief description of how we’re using it for this simple project.
Nix allows us to install and activate native packages only while working on a project, keeping the rest of the system untouched. We define the environment in a `shell.nix` file at the project root. That file looks like this:
```nix
# Allow passing pkgs manually, otherwise use versions pinned by niv in sources.nix
{ pkgs ? import (import ./nix/sources.nix).nixpkgs {} }:

# "unpack" the properties of the pkgs attribute set into the current namespace.
with pkgs;

mkShell {
  buildInputs = [
    bashInteractive # standard version of bash
    direnv          # automatic env sourcing
    hugo            # the site generator itself
    git             # used for theme submodule management
    gnugrep         # standard version of grep
    gnumake         # standard version of make
    niv             # update and add nix channels
    nix-direnv      # env caching
  ];
}
```
Nix is a functional language, so everything is an expression; nix files are therefore always expressions. In this case, our expression is a function (general form `argument: body`) that takes an “attribute set” (what other languages call a hash map, a dictionary, an object, an associative array, or …) and returns the result of calling the `mkShell` function. The `buildInputs` argument to `mkShell` determines which packages will be available, and the set of packages here is everything we need for local development.
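As a toy illustration of that `argument: body` form (not part of our repo), here is a nix function that, like our `shell.nix`, takes an attribute set whose members have defaults, so it can be called with an empty set `{}`:

```nix
# A function from an attribute set to a string. Both members
# default if the caller doesn't provide them.
{ greeting ? "Hello", name ? "World" }:
"${greeting}, ${name}!"

# Calling it (e.g. from `nix repl`, assuming it's saved as greet.nix):
#   (import ./greet.nix) {}                      => "Hello, World!"
#   (import ./greet.nix) { name = "SpecTrust"; } => "Hello, SpecTrust!"
```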
While we’re pretty excited about the upcoming flakes feature in nix for pinning versions of nixpkgs (the set of available packages) and other dependencies, flakes are still experimental and require some fiddling with configs, which we don’t want to force on engineers who aren’t interested. So, we use niv to pin our dependencies. Our `shell.nix` function automatically uses those pinned sources, but it also allows providing a different package set if desired. Upgrading is as simple as running `niv update`.
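For example (illustrative commands, though the flags themselves are standard), the pinned sources can be refreshed with niv, and the `pkgs ?` default argument means the package set can be overridden at the command line:

```shell
# Update every pinned source, or just nixpkgs, to its latest revision.
niv update
niv update nixpkgs

# Override the pinned package set with whatever <nixpkgs> channel
# is on this machine, via nix-shell's --arg flag.
nix-shell --arg pkgs 'import <nixpkgs> {}'
```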
Once this is in place, we use the `use nix` function from direnv and the extra caching layer from nix-direnv via an `.envrc` file that looks like this:
```shell
# Ensure that nix-shell packages and environment variables are set in the environment
if [[ $(command -v "nix-env") != "" ]]; then
  # Use the `use nix` command from nix-direnv if available.
  # nix-direnv caches the shell definition so it doesn't need to
  # be calculated every time, significantly speeding up execution
  # of the .envrc file. This assumes that nix-direnv has been
  # installed via `nix-env` (via `make setup`). If you're using
  # NixOS or have installed nix-direnv another way, feel free
  # to add in checks for config files that might live elsewhere.
  if [[ -e ~/.nix-profile/share/nix-direnv/direnvrc ]]; then
    source ~/.nix-profile/share/nix-direnv/direnvrc
  fi
  # fall back to using direnv's builtin nix support
  # to prevent bootstrapping problems.
  use nix
fi
```
With that, as long as an engineer has `direnv` installed, whenever they are working on this project they will automatically be using all of the nix dependencies.
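One caveat worth noting: direnv refuses to run an `.envrc` it hasn’t seen before, so each engineer approves it once on first checkout (the directory name here is a placeholder):

```shell
cd spectrust-blog   # hypothetical repo directory name
direnv allow        # trust this project's .envrc; from then on,
                    # entering the directory activates the nix env
```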
This is a simple example, but we also use this approach in our much more complicated monorepo, so we can confirm that it scales well.
### GitLab Deploy
GitLab makes deploying wildly simple. Our `.gitlab-ci.yml` file looks like this:
```yaml
# Use an image with nix pre-installed
image: nixos/nix

variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  # cache the nix store for speed
  paths:
    - /nix/store

pages:
  script:
    # Run `make build` in the nix shell (which has all of our nix-defined
    # requirements available). This will create the static website at ./public/,
    # which will be automatically served by GitLab when the artifact from this
    # build is published.
    - nix-shell --run 'make build'
  artifacts:
    paths:
      - public
  only:
    - master
```
As soon as a commit is merged into `master`, the CI process kicks off, and the generated output is published.
### Publishing Content
Any engineer can clone this repo, install everything with `make setup`, and be writing a post within minutes. Hugo provides the ability to write “draft” posts, which show up in our local environment but not in production, allowing engineers to check in partial work to save their progress or get early feedback. When a post is ready for approval, we open a merge request and share it with whoever needs to check it before it goes out. Once the approvals are in, we hit merge, and we’re done.
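Under the hood, a draft is just a post whose front matter has `draft: true`; the production build skips it, while `hugo server -D` (and our local setup) renders it. A freshly created post might look something like this (title and date are examples):

```markdown
---
title: "Hello, World"
date: 2021-06-01T09:00:00-07:00
draft: true
---

Post content goes here…
```

Flipping `draft` to `false` is all it takes to include the post in the published site.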
Hugo automatically creates an RSS feed of all posts, and the theme we’re using provides some nice extra features like tags, searching, and an archive view.
## Summary
We wanted to build a process for writing blog content that was as easy and pleasant as possible for our engineers. We drew on our previous experience setting up standardized developer environments with nix to get things running quickly. Finally, GitLab gave us a no-hassle way to publish the generated site. The result is a blog platform that any engineer can contribute to via a standard merge request.
We hope this process will be easy and friendly enough that we’ll be able to post content here regularly, and we hope that you will enjoy it!