Uptime Kuma, SQLite and Litestream!

For a long time, we relied on a very KISS piece of software, Monit, and even though it was a very good one, we were facing multiple issues:

  • no public status page
  • the notification system was based on shell scripts
  • the design was very ugly

Here comes the boom!

Uptime Kuma shocked the world of self-hosted monitoring for multiple reasons:

  • Very refreshing design
  • Fully WebSocket-based, which gives some nice real-time feedback
  • SQLite based

The last point, which most people loved, was, IMHO, the biggest disaster: no cloud provider offers SQLite as a service!

Moreover, some people are sure that using "traditional" databases is useless (https://github.com/louislam/uptime-kuma/issues/953#issuecomment-984718993), and I already saw a message saying: "Just launch a k8s, and it's magic, it will work!"!

First of all: NO! Second, are you sure it's a great idea?

Persistent Disks in the cloud

How many times have I seen this issue? Too many times; my soul is tainted by it...

But which issue, Loic?

NFS on the cloud!

How do you share a PHP code base between multiple EC2 instances? NFS, of course (EFS on AWS)!

How do you share logs between your servers? Same answer!

So how do you keep SQLite backed up? EFS, of course!

Filesystems, POSIX and...

To understand why it's a bad idea, we should go back to the origins: what is a filesystem?

In computing, a file system or filesystem (often abbreviated to fs) is a method and data structure that the operating system uses to control how data is stored and retrieved. Without a file system, data placed in a storage medium would be one large body of data with no way to tell where one piece of data stopped and the next began, or where any piece of data was located when it was time to retrieve it.

On Linux, macOS and most BSDs, the filesystem is POSIX (or at least POSIX-compliant). What does that mean?

  • Strong consistency: a read that follows a write must return the data that was written.
  • Atomic writes: a read returns either all the data written by a write or none of it, never an incomplete write.
  • Access control on files through various permissions, including file locking.
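That last guarantee, file locking, is easy to see in action. Here is a small sketch using flock from util-linux (the lock file name and timings are just illustrative): one process holds an exclusive lock while another tries, and fails, to grab it.

```shell
#!/usr/bin/env bash
# Sketch: POSIX advisory locking with flock (util-linux). Illustrative only.
lockfile=$(mktemp)

# A background process takes an exclusive lock on fd 9 and holds it for 2s.
( flock -x 9; sleep 2 ) 9>"$lockfile" &

sleep 0.5   # give the background process time to acquire the lock

# A non-blocking (-n) attempt from another process fails while the lock is held.
if flock -n -x 8 8>"$lockfile"; then
  result="lock acquired"
else
  result="lock busy"
fi
echo "$result"

wait
rm -f "$lockfile"
```

This is exactly the mechanism SQLite relies on to serialize writers, and it is one of the guarantees that gets shaky once NFS enters the picture.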

As you can see, great features so far, but guess what happens if you have:

  • multiple servers trying to access the same file?
  • some network latency?

The answer is simple: either you lose a lot of time at the filesystem level, or you face a concurrent-access problem (the famous Dining Philosophers Problem)!

Solution

Don't use NFS; use an object storage like S3 (or a real database).

But how can I use S3 for Uptime Kuma? I cannot! Not as it is, because SQLite cannot work on top of an object storage (but maybe it could work on top of FoundationDB, right Pierre Zemb?).

Dumb and Dumber

Out of curiosity, I tried to deploy Uptime Kuma on Clever Cloud with an NFS mount, and how can I put it in a few nice words? Oh yeah, it was a f****ng disaster! Issues with SQLite, loss of sync, loss of data, etc.

Knight Litestream, at your service!

What is Litestream? Litestream is a streaming replication solution for SQLite, written in Go, built on the SQLite WAL, which can use many different storage backends not only to stream, but also to back up and recover!

How does it help? It covers the biggest problem of SQLite: disaster recovery. If your server dies, so does your data.

Its creator, Ben Johnson, wrote a very nice article on why he built Litestream, which I recommend reading: https://litestream.io/blog/why-i-built-litestream/!

How does Litestream work? Litestream performs asynchronous replication, which means that changes are replicated out-of-band from the transaction that wrote them. This is how many replication tools work, including PostgreSQL's log-shipping replication. Asynchronous replication is typically much faster than synchronous replication, but it trades off durability of recent writes.

By default, Litestream will replicate new changes to an S3 replica every second. During the window where data has not yet been replicated, a catastrophic crash on your server will result in the loss of the data written in that window.

Furthermore, it also implements a snapshot system, every 24 hours by default, which speeds up the restore process if you ever need it!
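Both intervals are tunable per replica in the configuration file. Here is a minimal sketch based on Litestream's configuration reference; the bucket name and database path are placeholders, not the ones used later in this post:

```yaml
# Illustrative litestream.yml fragment (placeholder names)
dbs:
  - path: /data/kuma.db
    replicas:
      - type: s3
        bucket: my-backup-bucket   # placeholder bucket name
        path: db
        sync-interval: 1s          # push new WAL pages every second (the default)
        snapshot-interval: 24h     # take a full snapshot once a day
```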

And the best part: it is compatible with S3-like systems! And who else is S3-compatible? Clever Cloud!

So it's time to deploy!

Clever Cloud + Uptime Kuma + Litestream

First, fork https://github.com/KalvadTech/uptime-kuma-clevercloud for your organization!

Then go to Clever Cloud, and create a Cellar S3 Storage add-on:

On the next screen, don't link it to any application for the moment, and give it a name (uptime-kuma-s3 for example).

Once the S3-like storage is deployed, create a bucket, and gather the required information:

  • Key ID
  • Key Secret
  • Bucket name

Store this information.

Then it's time to deploy Uptime Kuma itself!

Create an application, and link it to the fork of the repo defined earlier.

Then select the NodeJS type, keep an XS instance (1 CPU, 1 GB RAM), give it a name, and don't add any add-on!

Finally, the most important screen: the Environment variables.

Add the following variables:

  • LITESTREAM_BUCKET: the name of your bucket
  • CC_RUN_COMMAND: "bash run.sh"
  • CC_WORKER_COMMAND: "bash worker.sh"
  • CC_WORKER_RESTART: "always"

Don't forget to click Update, then click Next.

Finally, go to your new application, click on Service Dependencies and add the previously created Cellar!

Under the hood!

That's great, I explained how to deploy, but knowing the trick behind it is always better!

The repo contains 3 scripts:

build.sh

#!/usr/bin/env bash
set -e
set -x
[ -z "$APP_HOME" ] && export APP_HOME=$(pwd)
[ -z "$UPTIME_KUMA_VERSION" ] && export UPTIME_KUMA_VERSION="1.15.1"
[ -z "$LITESTREAM_VERSION" ] && export LITESTREAM_VERSION="0.3.8"
[ -z "$LITESTREAM_BUCKET" ] && export LITESTREAM_BUCKET="uptime-kuma"

rm -rf $APP_HOME/uptime-kuma*
wget -O uptime-kuma-$UPTIME_KUMA_VERSION.tar.gz https://github.com/louislam/uptime-kuma/archive/refs/tags/$UPTIME_KUMA_VERSION.tar.gz
tar xvzf uptime-kuma-$UPTIME_KUMA_VERSION.tar.gz

rm -rf $APP_HOME/litestream*
wget https://github.com/benbjohnson/litestream/releases/download/v$LITESTREAM_VERSION/litestream-v$LITESTREAM_VERSION-linux-amd64-static.tar.gz
tar xvzf litestream-v$LITESTREAM_VERSION-linux-amd64-static.tar.gz
cd uptime-kuma-$UPTIME_KUMA_VERSION
npm ci --production && npm run download-dist

mkdir -p $APP_HOME/fs

build.sh is going to download what you need to operate:

  • Uptime Kuma: you can specify the version via the UPTIME_KUMA_VERSION env variable
  • Litestream: you can specify the version via the LITESTREAM_VERSION env variable

It downloads both archives, extracts them, and finally goes inside the uptime-kuma folder and runs npm ci --production && npm run download-dist.

run.sh

#!/usr/bin/env bash
set -e
set -x
[ -z "$APP_HOME" ] && export APP_HOME=$(pwd)
[ -z "$UPTIME_KUMA_VERSION" ] && export UPTIME_KUMA_VERSION="1.15.1"
[ -z "$LITESTREAM_VERSION" ] && export LITESTREAM_VERSION="0.3.8"
[ -z "$LITESTREAM_BUCKET" ] && export LITESTREAM_BUCKET="uptime-kuma"
[ -z "$DATA_DIR" ] && export DATA_DIR="$APP_HOME/fs/"


cat <<EOF > $APP_HOME/litestream.yml
dbs:
  - path: ${DATA_DIR}kuma.db
    replicas:
      - type: s3
        bucket: $LITESTREAM_BUCKET
        path: db
        endpoint: $CELLAR_ADDON_HOST
        region: fr-par
        access-key-id: $CELLAR_ADDON_KEY_ID
        secret-access-key: $CELLAR_ADDON_KEY_SECRET
EOF
rm -rf $DATA_DIR
mkdir -p $DATA_DIR
./litestream restore -config litestream.yml ${DATA_DIR}kuma.db
cd uptime-kuma-$UPTIME_KUMA_VERSION
node server/server.js

What is going to be executed?

It generates the Litestream configuration file, then deletes the database folder, and restores the latest version from the S3 bucket.

Then it's going to launch uptime-kuma!

worker.sh

#!/usr/bin/env bash
set -e
set -x
[ -z "$APP_HOME" ] && export APP_HOME=$(pwd)
[ -z "$UPTIME_KUMA_VERSION" ] && export UPTIME_KUMA_VERSION="1.15.1"
[ -z "$LITESTREAM_VERSION" ] && export LITESTREAM_VERSION="0.3.8"
[ -z "$LITESTREAM_BUCKET" ] && export LITESTREAM_BUCKET="uptime-kuma"
[ -z "$DATA_DIR" ] && export DATA_DIR="$APP_HOME/fs/"


cat <<EOF > $APP_HOME/litestream.yml
dbs:
  - path: ${DATA_DIR}kuma.db
    replicas:
      - type: s3
        bucket: $LITESTREAM_BUCKET
        path: db
        endpoint: $CELLAR_ADDON_HOST
        region: fr-par
        access-key-id: $CELLAR_ADDON_KEY_ID
        secret-access-key: $CELLAR_ADDON_KEY_SECRET
EOF
./litestream replicate -config litestream.yml

Here is the trick, and thanks to Clever Cloud for providing this feature: it launches a worker dedicated to syncing SQLite to Cellar!
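If you want to check that the worker is actually shipping data to Cellar, Litestream's CLI can list what it sees on the replica. A sketch, assuming you run it from the application directory where run.sh/worker.sh generated litestream.yml and extracted the litestream binary:

```shell
# Sketch: inspect the replica state (requires the litestream binary and the
# litestream.yml generated by run.sh/worker.sh in the current directory)
./litestream generations -config litestream.yml "${DATA_DIR}kuma.db"
./litestream snapshots -config litestream.yml "${DATA_DIR}kuma.db"
```

If the worker is healthy, you should see at least one generation and, after the first day, daily snapshots.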

If you have a problem and no one else can help, maybe you can hire the Kalvad-Team.