When I started this blog at the end of 2018, I didn’t want to spend much time on the blog’s setup; I wanted to get started writing as soon as possible. For this reason, I chose a preconfigured VM image with the blogging software Ghost that was offered by my cloud provider. In the meantime, writing has become a hobby of mine, even though I keep most of what I write private. I therefore decided to take the time to migrate the blog to a custom, container-based setup. This post describes the process.

Motivation

Having a replicable setup is very important to me for recovery from catastrophic events. Being able to easily colocate additional applications on the same host was another major motivation. Containerization has become second nature to me; I simply prefer this style of software operations and am experienced with it. Over the course of the migration, I got to know every little detail of my new setup and am confident I can fix any issue that comes up. Incidentally, containers made it very easy to upgrade to newer versions of the software components involved (Ghost, Nginx, MySQL). Security improved because all components run as separate containers with resource isolation enforced by the kernel. Experimenting with Traefik v2 was also interesting.

Initial setup

The old system served me well and didn’t have any major issues. Nginx acted as a reverse proxy that terminated TLS and passed requests to the Ghost blogging software, which is based on Node.js. The ACME client acme.sh ran regularly as a cron job to renew the Let’s Encrypt certificates when necessary. Whenever a certificate was renewed, acme.sh also triggered an Nginx reload to ensure Nginx picked up the renewed certificate.
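For illustration, the renewal setup on the old host looked roughly like this. The paths and the exact crontab entry are reconstructed from memory, so treat them as assumptions:

# Crontab entry installed by acme.sh itself, checking daily whether renewal is due:
# 0 3 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null

# The reload hook is registered once when installing the certificate:
acme.sh --install-cert -d www.nicktriller.com \
  --key-file       /etc/nginx/ssl/blog.key \
  --fullchain-file /etc/nginx/ssl/blog.crt \
  --reloadcmd      "systemctl reload nginx"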

Blog before the changes

One thing I didn’t like about this solution was that the files belonging to the different components were spread across the whole file system. Adding further workloads to this server would have become messy.

New setup

There is one additional component, the reverse proxy Traefik. It terminates TLS and also handles certificate issuance and renewal via the ACME protocol. I like Traefik because it can use many different providers to discover services dynamically. In this case, the provider is Docker. Any service-specific Traefik configuration can be supplied through container labels. I kept Nginx for caching and URL rewriting.

Blog after the changes

The Docker host is based on a preconfigured VM image provided by my cloud provider. However, I additionally enabled Docker log rotation to prevent container logs from filling the disk. I describe how to enable log rotation below.

In the diagram, all components in green boxes are containerized. All containers are orchestrated with docker-compose. There is one compose file for Traefik and one for the blog components (Nginx, Ghost and MySQL). The “Other application” in the diagram is a placeholder for additional workloads that might be added in the future. The docker-compose files and configuration files are located under /srv.
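For reference, the directory layout under /srv looks roughly like this; the nginx template file name is an assumption (the image picks up any *.template file):

/srv
├── traefik
│   ├── docker-compose.yaml
│   └── config
│       └── traefik.yaml
└── blog
    ├── docker-compose.yaml
    ├── .env
    └── config
        └── nginx
            └── templates
                └── default.conf.template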

Enabling Docker log rotation

Create /etc/docker/daemon.json with the following content:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "5m",
    "max-file": "3"
  }
}

Then restart the Docker daemon with “sudo systemctl restart docker”.
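To verify the change took effect, the default logging driver and a container’s log configuration can be inspected; the output shown in the comments is what I would expect, slightly abbreviated:

docker info --format '{{.LoggingDriver}}'
# json-file

docker inspect --format '{{.HostConfig.LogConfig}}' <container>
# {json-file map[max-file:3 max-size:5m]}

Note that the new log options only apply to containers created after the restart; existing containers keep their previous log configuration.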

Traefik docker-compose

I decided to put Traefik into a separate docker-compose file because it will serve multiple workloads in the future. The “frontend” network must be created manually first with “docker network create frontend”. Traefik and all workloads that are exposed through it must be connected to this network.

version: "3.8"

services:

  traefik:
    image: traefik:v2.3.1
    restart: always
    ports:
      # Use unprivileged ports in the container
      - 443:8443
      - 80:8080
    volumes:
      # The docker socket is mounted for auto-discovery of new services
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      # Mount of the traefik config
      - "./config/traefik.yaml:/etc/traefik/traefik.yaml:ro"
      # ACME account file for letsencrypt
      - "acme-data:/data/letsencrypt/"
    cap_drop:
      - ALL
    networks:
      - frontend

volumes:
  acme-data: {}

networks:
  frontend:
    external: true
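With the compose file in place, bringing Traefik up comes down to the usual commands, run from the directory containing the compose file:

docker network create frontend
docker-compose up -d
docker-compose logs -f traefik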

Traefik config

The above docker-compose mounts a Traefik configuration file:

# API definition
# Warning: Enabling API will expose Traefik's configuration.
# It is not recommended in production,
# unless secured by authentication and authorizations
api:
  dashboard: false
  insecure: false
  debug: false

entryPoints:
  # Always redirect port 80 to port 443
  web:
    address: ':8080'
    http:
      redirections:
        entryPoint:
          to: ':443'
  websecure:
    address: ':8443'
    http:
      tls:
        certResolver: leresolver
        domains:
          - main: www.nicktriller.com
            sans: [nicktriller.com,test.nicktriller.com]

providers:
  docker:
    exposedByDefault: false
    network: frontend

certificatesResolvers:
  leresolver:
    acme:
      email: [email protected]
      # Let's encrypt staging server for testing:
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      storage: /data/letsencrypt/acme.json
      httpChallenge:
        # used during the challenge
        entryPoint: web
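Once Traefik is up, the redirect and the issued certificate can be smoke-tested with curl; this is just a quick check, not part of the original setup:

curl -sI http://www.nicktriller.com | head -n 1
# Expect a 3xx redirect to the https:// URL

curl -svI https://www.nicktriller.com 2>&1 | grep -i issuer
# Should name Let's Encrypt as issuer once the certificate has been obtained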

Blog docker-compose

This docker-compose file contains Nginx, Ghost and MySQL. The Nginx container is annotated with labels that are interpreted by Traefik. The ${...} placeholders are populated from an .env file next to the compose file; an example follows below.

version: "3.8"

services:

  nginx:
    image: nginxinc/nginx-unprivileged:1.19.4-alpine
    restart: always
    volumes:
      - ./config/nginx/templates:/etc/nginx/templates:ro
      - nginx-cache:/var/cache/nginx
    ports:
      - 8080:8080
    environment:
      BLOG_APEX: "${BLOG_APEX}"
      BLOG_HOSTNAME: "${BLOG_HOSTNAME}"
    cap_drop:
      - ALL
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.ghost.rule=Host(`$BLOG_HOSTNAME`) || Host(`$BLOG_APEX`)"
      - "traefik.http.routers.ghost.entrypoints=websecure"
    networks:
      - backend
      - frontend

  ghost:
    image: ghost:2.38.2-alpine
    restart: always
    environment:
      # see https://docs.ghost.org/docs/config#section-running-ghost-with-config-env-variables
      database__client: mysql
      database__connection__host: db
      database__connection__user: ghost
      database__connection__password: "${DB_PASSWORD}"
      database__connection__database: ghost_production
      url: https://$BLOG_HOSTNAME/blog/
    volumes:
      # Content directory contains themes, images, etc.
      - ghost-data:/var/lib/ghost/content
    cap_drop:
      - ALL
    # Ghost requires CAP_SETGID and CAP_SETUID to switch to an unprivileged user after chown'ing directories
    cap_add:
      - SETGID
      - SETUID
    networks:
      - backend

  db:
    image: mysql:5.7.31
    restart: always
    environment:
      MYSQL_DATABASE: ghost_production
      MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"
      MYSQL_USER: "ghost"
      MYSQL_PASSWORD: "${DB_PASSWORD}"
    volumes:
      - db-data:/var/lib/mysql
    cap_drop:
      - ALL
    cap_add:
      # Entrypoint script uses find
      - DAC_READ_SEARCH
      # Required to drop to dedicated mysql user
      - SETUID
      - SETGID
    networks:
      - backend

volumes:
  db-data: {}
  ghost-data: {}
  nginx-cache: {}

networks:
  backend: {}
  frontend:
    external: true
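The compose file expects the following variables in an .env file next to it (/srv/blog/.env); the values below are placeholders, not my real secrets:

BLOG_APEX=nicktriller.com
BLOG_HOSTNAME=www.nicktriller.com
DB_PASSWORD=<random secret>
DB_ROOT_PASSWORD=<random secret>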

Nginx config

The Nginx configuration looks like this. The placeholders, e.g. “${BLOG_APEX}”, are substituted with the corresponding environment variables by an entrypoint script in the Nginx docker image: since version 1.19, the image renders any *.template file found in /etc/nginx/templates into /etc/nginx/conf.d at startup. All assets except for admin and preview pages are cached on the server.

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:8m max_size=4g 
                  inactive=60m use_temp_path=off;

# Redirect naked domain to www subdomain
server {
    listen 8080;
    listen [::]:8080;

    server_name ${BLOG_APEX};
    return 302 http://www.$host$request_uri;
}

server {
    listen 8080 default_server;
    listen [::]:8080 default_server;

    server_name ${BLOG_HOSTNAME};

    location /robots.txt {
        return 302 $scheme://$host/blog/robots.txt;
    }

    location / {
        return 302 $scheme://$host/blog;
    }

    # Don't cache preview and admin pages
    location ~ ^/(blog/admin|blog/p|blog/ghost) {
        # don't cache it
        proxy_no_cache 1;
        # even if cached, don't try to use it
        proxy_cache_bypass 1;
        proxy_pass http://ghost:2368; 
    }

    location /blog {
        # For valid responses, cache it for 1 day
        proxy_cache_valid 200 1d;
        # For not found, cache it for 5 minutes
        proxy_cache_valid 404 5m;
        # Use the nginx cache zone called my_cache
        proxy_cache my_cache;

        # Ghost sends cookies and cache headers that break the nginx caching, so we have to ignore them
        proxy_ignore_headers "Set-Cookie";
        proxy_hide_header "Set-Cookie";
        proxy_ignore_headers "Cache-Control";
        proxy_hide_header "Cache-Control";
        proxy_hide_header "Etag";

        # Add header for debugging
        add_header X-Cache $upstream_cache_status;

        proxy_pass http://ghost:2368;
    }

    client_max_body_size 50m;
}
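The caching behaviour can be verified via the X-Cache debugging header added above; the output comments show what I would expect, not captured output:

curl -sI https://www.nicktriller.com/blog/ | grep -i x-cache
# First request:  X-Cache: MISS
# Second request: X-Cache: HIT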

Creating a backup

There are two stateful components. Obviously, the MySQL instance contains state. Additionally, Ghost stores themes and images that are embedded in posts on the filesystem. I already had a script that backed up all data regularly. To prepare the migration, I simply ran it. It creates an archive containing a .sql file with a database dump as well as the relevant Ghost directories. Further below, you can find a version of this backup script adapted to the containerized setup.

Restoring the backup and fixing database users

The migration process consisted of these steps:

  1. Create new VM and configure it.
  2. Clone the docker-compose file onto the VM.
  3. Run a script that restores the backup and fixes inconsistencies.
  4. Change DNS to point at the new IP.
  5. Delete the old VM.

This is the script that restored the backup on the new VM. Ghost uses a named volume for all important data. To restore the backup, I create a temporary container that mounts the named volume and use “docker cp” to place the files into the volume; the temporary container never has to be started and is deleted afterwards. To restore the database dump, the SQL dump is streamed into stdin of the mysql command in the MySQL container.

The database user permissions in the backup reflect the previous host-based setup: for example, the root user may only connect from localhost and has no password set. The script therefore also fixes the MySQL users to work in the containerized setup.

#!/bin/bash
set -euo pipefail

#################################################################
# Prepare
#################################################################

# Contains ROOT_PASS
source 99_env-vars.sh
# Create external docker network, ignore error in case it already exists
docker network create frontend || true
# Create named volume as referenced in docker-compose, ignore error in case it already exists
docker volume create blog_ghost-data


#################################################################
# Restore ghost content directory (custom theme, images, etc.)
#################################################################
docker container create --name restore-into-named-volume -v blog_ghost-data:/data hello-world
# *.tar files are automatically expanded if piped into "docker cp"
cat backup/archive-backup-2020-10-17.tar | docker cp - restore-into-named-volume:/data
docker rm restore-into-named-volume


#################################################################
# Restore mysql dump
#################################################################
# Start db
docker-compose --env-file ../blog/.env -f ../blog/docker-compose.yaml up -d db
sleep 10
# Restore dump
cat ./backup/mysql-backup-2020-10-17.sql | docker-compose -f ../blog/docker-compose.yaml exec -T db mysql -uroot -p$ROOT_PASS
# Restart mysql to apply the rights of the restored users
docker-compose -f ../blog/docker-compose.yaml restart db
sleep 10


#################################################################
# Fix mysql users
#################################################################
function db_exec {
  docker-compose -f ../blog/docker-compose.yaml exec -T db mysql -uroot -e "$1"
}

# Delete debian-sys-maint user
echo "Drop debian-sys-maint user"
db_exec "DROP USER IF EXISTS 'debian-sys-maint'@'localhost';"

# Set password for root user
echo "Set password for root user"
db_exec "UPDATE mysql.user SET authentication_string=PASSWORD('$ROOT_PASS') WHERE user='root';"
db_exec "UPDATE mysql.user SET plugin='mysql_native_password' WHERE user='root';"

# Allow ghost user to login and access the db from any host
echo "Change host of ghost user" 
db_exec "UPDATE mysql.db SET Host='%' WHERE Host='localhost' AND User='ghost';"
db_exec "UPDATE mysql.user SET Host='%' WHERE Host='localhost' AND User='ghost';"

# (Re)start mysql, ghost and traefik
echo "Restart mysql and ghost"
docker-compose -f ../blog/docker-compose.yaml restart db
docker-compose --env-file ../blog/.env -f ../blog/docker-compose.yaml up -d --force-recreate ghost
docker-compose -f ../traefik/docker-compose.yaml up -d
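After the script has run, a quick query confirms the users were fixed; this sanity check is my addition, not part of the original script:

docker-compose -f ../blog/docker-compose.yaml exec -T db \
  mysql -uroot -p$ROOT_PASS -e "SELECT User, Host FROM mysql.user;"
# The ghost user should now be listed with Host '%'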

New backup script

Going forward, I would like to keep creating backups regularly, so I adjusted my existing backup script to work with the containerized setup. The script creates a directory /root/backup on the target machine, deleting it first in case it already exists. All data is dumped into this directory: a tar archive with the Ghost content and a .sql file with the database dump. Finally, the backup files are transferred via scp to the machine that ran the backup script.

blog-backup.sh

#!/bin/bash
LOCAL_BACKUP_DIR="data/$(date +"%F_%H-%M-%SZ")"
SSH_CONFIG=blog

# Create backup
ssh $SSH_CONFIG "bash -s" < remote-cmds.sh

# Copy backup
mkdir -p "$LOCAL_BACKUP_DIR"
scp -r "$SSH_CONFIG":/root/backup/* "$LOCAL_BACKUP_DIR"
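The SSH_CONFIG variable refers to a host alias in ~/.ssh/config. A minimal entry could look like this; host name, user and key path are placeholders:

Host blog
    HostName <server ip>
    User root
    IdentityFile ~/.ssh/id_ed25519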

remote-cmds.sh

#!/bin/bash
# Name: remote-cmds.sh
# Purpose: Create backup of ghost 
# ----------------------------------------------------

# Contains DB_ROOT_PASSWORD
source /srv/blog/.env
BACKUP_DIR=/root/backup
MYSQL_USER=root
# (Re)create backup directory
rm -rf $BACKUP_DIR
mkdir $BACKUP_DIR
# Create file archive
docker cp blog_ghost_1:/var/lib/ghost/content - > "$BACKUP_DIR/archive-backup-$(date +%F).tar"
# Create database dump
docker-compose -f /srv/blog/docker-compose.yaml exec -e MYSQL_PWD=$DB_ROOT_PASSWORD -T \
  db mysqldump --all-databases --single-transaction --user root \
  > $BACKUP_DIR/mysql-backup-$(date +%F).sql

docker-compose entropy issues

Sometimes, executing docker-compose commands would hang for minutes without doing anything, even for a plain “docker-compose help”. It turned out the problem was a lack of entropy: docker-compose blocked until enough entropy was available to generate random data. A VM doesn’t have many entropy sources; there is no keyboard, mouse and so on. Installing haveged fixed the problem. Haveged harvests the indirect effects of hardware events on hidden processor state (caches, branch predictors, memory translation tables, etc.) to generate a random sequence. I found the solution in this GitHub issue: https://github.com/docker/compose/issues/6678
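The symptom is easy to diagnose because the kernel exposes the size of the available entropy pool; the install command assumes a Debian/Ubuntu based image:

cat /proc/sys/kernel/random/entropy_avail
# Values in the low hundreds or below indicate entropy starvation

sudo apt-get install haveged
cat /proc/sys/kernel/random/entropy_avail
# Should report a well-filled pool shortly after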

Line endings in .env files

I stumbled over a rookie mistake in the process of the migration. The .env files contained Windows line endings (“\r\n”) instead of Linux line endings (“\n”). docker-compose seemingly interpreted the Windows line endings correctly, but my backup script failed to authenticate with MySQL because the password contained a “\r” at the end. Printing the environment variables didn’t help because the “\r” is invisible in terminal output. Running the script in bash trace mode with “set -x” resolved my confusion.
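Detecting and fixing the problem takes one command each; these are standard tools, shown here for completeness:

file .env
# ".env: ASCII text, with CRLF line terminators" gives it away

cat -A .env
# CRLF shows up as "^M$" at the end of each line

sed -i 's/\r$//' .env
# Strips the carriage returns in place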

Conclusion

The migration went very smoothly for the most part. Embarrassingly, fixing the broken line endings in the .env files was the part that cost me the most time. In any case, I am happy with the result: I can easily colocate additional workloads like Grafana or Prometheus on the new machine, and for my private experiments, a single-host setup is completely sufficient.