I recently deployed a coffee review app (Next.js frontend, Supabase backend) entirely on a single k3s node with a full CI/CD pipeline. Push code, tests run, image builds, app deploys. No cloud services, no monthly bills. This post walks through every step and every gotcha.

If you have never touched Kubernetes, think of it like building a restaurant. Your server is the building. k3s is the restaurant manager who decides which chef works which station. Your app containers are the chefs. And the CI/CD pipeline is the hiring process that trains, vets, and assigns new chefs automatically whenever you update the menu. You do not need to micromanage any of it.

The Architecture

Here is the full picture. A user opens a browser, hits the frontend. The frontend talks to Supabase through Kong (an API gateway). Supabase is really just a collection of services sitting in front of PostgreSQL. Forgejo hosts the code and container images, and its built-in CI runner builds and deploys everything automatically on each push.

git push
    |
    v
[ Forgejo Actions CI ]
    |  1. build Docker image
    |  2. push to Forgejo registry
    |  3. update k3s deployment
    v
[ k3s cluster ]
 |
 |-- [ Next.js Frontend ] ----> MetalLB IP :80
 |
 |-- [ Kong API Gateway ] ----> MetalLB IP :8000
 |        |              |
 |        v              v
 |   [ PostgREST ]   [ GoTrue ]
 |   (REST API)      (Auth)
 |        |              |
 |        v              v
 |-- [ PostgreSQL 17 ]
 |   (your data)
 |
 |-- [ Forgejo ] ----> MetalLB IP :80 (git + registry + CI)

Think of Kong as the front desk of a hotel. You do not walk directly into the kitchen (PostgreSQL). You tell the front desk what you want, and they route your request to the right department: PostgREST for data, GoTrue for login and signup.

Why self-host Supabase? Supabase Cloud’s free tier has a 500MB database limit and 1GB storage limit. If you are building a scraping pipeline or letting users upload images, you will blow past those limits fast. Self-hosting costs you nothing beyond electricity and hardware you already own.

Prerequisites

This guide assumes you have a working k3s node. Specifically:

  • A server (VM or bare metal) running Ubuntu 24.04
  • k3s installed and running (kubectl get nodes shows Ready)
  • Traefik and servicelb disabled at install (we use MetalLB instead)
  • A config file at /etc/rancher/k3s/config.yaml
  • SSH key auth configured (no password login)

If starting from scratch, install k3s with a config like this:

# /etc/rancher/k3s/config.yaml
node-name: "k3s-node01"
write-kubeconfig-mode: "0644"
data-dir: "/mnt/k3s-data/k3s"
default-local-storage-path: "/mnt/k3s-data/local-storage"
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
flannel-backend: "vxlan"
disable:
  - traefik
  - servicelb
secrets-encryption: true

Gotcha: config.yaml vs service file flags. If you pass flags in both the systemd service file AND config.yaml, they merge and duplicate. Pick one source of truth. Use config.yaml and keep the service file as just ExecStart=/usr/local/bin/k3s server with no flags.

Gotcha: secrets-encryption must be enabled from first boot. If you try to enable it on an existing cluster using k3s secrets-encrypt enable, it can fail silently or generate a config with no actual encryption key. The cleanest path is including secrets-encryption: true in config.yaml before the first start. If you already have a cluster without it, do a clean reinstall: uninstall k3s, delete the data directory, add the flag, start fresh. On a nearly empty cluster this takes 2 minutes.

k3s Housekeeping

Before deploying anything, harden the node. 10 minutes now prevents headaches later.

SSH Key Auth

Generate a key on your workstation (ssh-keygen -t ed25519), copy it to the node (ssh-copy-id user@your-node), verify you can log in without a password, then disable password auth:

sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh

Always keep an existing terminal open when changing SSH config. That open session is your lifeline if you lock yourself out.

Stable IP

Your k3s node needs a fixed IP. Set a DHCP reservation in your router mapping the node’s MAC address to a specific IP, or configure a static IP inside Ubuntu. If the IP changes, everything downstream breaks.
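
If you go the static-IP route, a netplan file like this works on Ubuntu 24.04. This is a sketch: the filename, interface name, addresses, and gateway are placeholders you must adjust for your network.

```yaml
# /etc/netplan/01-k3s-static.yaml  (hypothetical filename)
network:
  version: 2
  ethernets:
    eth0:                    # replace with your interface (see `ip link`)
      dhcp4: false
      addresses:
        - 192.168.1.50/24    # the node's fixed IP
      routes:
        - to: default
          via: 192.168.1.1   # your router
      nameservers:
        addresses: [192.168.1.1]
```

Apply it with sudo netplan apply, then confirm with ip addr.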

Disable Swap

sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab

Why disable swap? Kubernetes manages memory for pods. When the OS swaps pod memory to disk, it confuses Kubernetes’ eviction logic and causes unpredictable behavior. If you have plenty of RAM, there is zero reason to keep it.

Backup

If you are on Proxmox, set up a VM backup before deploying workloads. A weekly snapshot with ZSTD compression gives you a rollback point. For a mostly-empty VM, a compressed backup is only a few gigabytes.

MetalLB: Giving Services Real IPs

In the cloud, when you create a Kubernetes Service of type LoadBalancer, AWS or GCP assigns a public IP automatically. On bare metal, nothing does that. MetalLB fills the gap.

Think of MetalLB as a receptionist handing out phone extensions. When a new service says “I need to be reachable,” MetalLB assigns it an IP from a pool you define.

Carve Out an IP Range

Pick a range on your local subnet that does not overlap with your DHCP pool. If your DHCP hands out .100-.199, give MetalLB .200-.250.

Install MetalLB

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml

kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s

Configure the IP Pool

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.250  # adjust for your subnet
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool

L2 mode means MetalLB responds to ARP requests on your local network. Simple, no BGP routers needed, perfect for a home setup.
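
To confirm the pool works before deploying anything real, you can create a throwaway LoadBalancer service and check that it gets an address from your range. The names here are illustrative; the selector matches nothing, since we only care about IP assignment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-test
spec:
  type: LoadBalancer
  selector:
    app: lb-test    # no pods need to exist for MetalLB to assign an IP
  ports:
    - port: 80
```

After kubectl apply, kubectl get svc lb-test should show an EXTERNAL-IP from your pool (e.g. 192.168.1.200). Delete the service once you have seen it.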

Forgejo: Your Own Git Server

Forgejo is a lightweight, open-source Git server (a fork of Gitea) with a built-in container registry and CI system. We deploy it on k3s so we have a place to push code, store Docker images, and run CI pipelines without depending on GitHub.

The deployment has two parts: a Postgres database for Forgejo’s internal data, and Forgejo itself. Both get persistent volume claims so data survives pod restarts.

# Forgejo's Postgres
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: forgejo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo-postgres
  template:
    metadata:
      labels:
        app: forgejo-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          env:
            - name: POSTGRES_USER
              value: forgejo
            - name: POSTGRES_PASSWORD
              value: <your-password>
            - name: POSTGRES_DB
              value: forgejo
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
              subPath: pgdata
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: forgejo-postgres

Give Forgejo a LoadBalancer service and MetalLB assigns it an IP. Open that in a browser, run through the setup wizard, and you have a local GitHub.
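
A minimal Service for that looks like the following. This assumes your Forgejo pods carry the label app: forgejo and listen on Forgejo's default ports (3000 for HTTP, 22 for SSH); adjust to match your deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: forgejo
  namespace: forgejo
spec:
  type: LoadBalancer     # MetalLB assigns an IP from the pool
  selector:
    app: forgejo
  ports:
    - name: http
      port: 80
      targetPort: 3000   # Forgejo's default web port
    - name: ssh
      port: 22
      targetPort: 22
```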


Gotcha: SSL setting in Forgejo’s install wizard. The database SSL dropdown defaults to “Require.” If your Postgres container does not have SSL configured (it almost certainly does not), the install will fail with “password authentication failed.” Change SSL to “Disable.”

Tip: Enable push-to-create. By default, Forgejo requires you to create a repository in the web UI before pushing. For an automated workflow, add ENABLE_PUSH_CREATE_USER = true under [repository] in Forgejo’s app.ini. Then git push to a non-existent repo auto-creates it.

Deploying Supabase Postgres

Supabase uses a special PostgreSQL image (supabase/postgres) pre-loaded with extensions like pgvector, pg_graphql, and PostGIS. This is separate from Forgejo’s Postgres. Deploy it in a supabase namespace.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: supabase
spec:
  replicas: 1
  selector:
    matchLabels:
      app: supabase-db
  template:
    metadata:
      labels:
        app: supabase-db
    spec:
      containers:
        - name: postgres
          image: supabase/postgres:17.6.1.075
          env:
            - name: POSTGRES_PASSWORD
              value: <your-db-password>
            - name: POSTGRES_DB
              value: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
              subPath: pgdata
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: supabase-db

Gotcha: Do not override the entrypoint command. You might add command: ["postgres", "-c", "config_file=..."] like some docs suggest. Do not do this with the Supabase PG17 image. The image’s entrypoint handles the root-to-postgres user switch internally. Overriding it causes Postgres to crash with “data directory has wrong ownership.” Let the entrypoint do its thing.

Gotcha: The default user is not “postgres”. The Supabase image creates supabase_admin as the superuser, not postgres. Connect with: psql "postgresql://supabase_admin:<password>@localhost:5432/postgres"

PG15 vs PG17: Supabase supports both. PG17 drops pgjwt and timescaledb. If your backup references these, strip those lines (sed -i '/CREATE EXTENSION.*pgjwt/d' backup.sql) or use the PG15 image.

Restoring Your Database

A fresh Supabase Postgres image is missing the roles and schemas that Supabase Cloud had. Create them before restoring a backup.

Create Roles and Schemas

-- Schemas
CREATE SCHEMA IF NOT EXISTS auth;
CREATE SCHEMA IF NOT EXISTS storage;
CREATE SCHEMA IF NOT EXISTS extensions;

-- Core roles
CREATE ROLE postgres WITH LOGIN SUPERUSER PASSWORD '<password>';
CREATE ROLE anon NOLOGIN NOINHERIT;
CREATE ROLE authenticated NOLOGIN NOINHERIT;
CREATE ROLE service_role NOLOGIN NOINHERIT BYPASSRLS;

-- PostgREST connector
CREATE ROLE authenticator WITH LOGIN PASSWORD '<password>' NOINHERIT;
GRANT anon TO authenticator;
GRANT authenticated TO authenticator;
GRANT service_role TO authenticator;

-- Service-specific roles
CREATE ROLE supabase_auth_admin WITH LOGIN PASSWORD '<password>' NOINHERIT;
GRANT ALL ON SCHEMA auth TO supabase_auth_admin;
GRANT USAGE, CREATE ON SCHEMA public TO supabase_auth_admin;

CREATE ROLE supabase_storage_admin WITH LOGIN PASSWORD '<password>' NOINHERIT;
GRANT ALL ON SCHEMA storage TO supabase_storage_admin;

GRANT USAGE ON SCHEMA public TO anon, authenticated, service_role;
ALTER SCHEMA public OWNER TO postgres;

Why all these roles? Supabase uses PostgreSQL’s Row Level Security (RLS) heavily. anon is for unauthenticated API users. authenticated is for logged-in users. service_role bypasses RLS for server-side operations. authenticator is what PostgREST connects as, switching to the appropriate role based on the JWT in each request. Think of it like a bouncer at a club who checks your ID and gives you a different wristband depending on your ticket type.
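
As a concrete illustration of how those roles interact with RLS, here is a hypothetical policy pair on a reviews table (the table and column names are invented for this example, not part of the setup above):

```sql
ALTER TABLE public.reviews ENABLE ROW LEVEL SECURITY;

-- anyone, logged in or not, may read reviews
CREATE POLICY "reviews are public" ON public.reviews
  FOR SELECT USING (true);

-- only a logged-in user may insert, and only as themselves
CREATE POLICY "authors insert own reviews" ON public.reviews
  FOR INSERT TO authenticated
  WITH CHECK (auth.uid() = user_id);
```

Requests running as anon pass the first policy but fail the second; service_role skips both thanks to BYPASSRLS.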

Restore and Grant

# Copy backup into the pod
kubectl cp backup.sql supabase/<pod-name>:/tmp/backup.sql

# Restore
kubectl exec -n supabase deploy/postgres -- \
  psql "postgresql://supabase_admin:<password>@localhost:5432/postgres" \
  -f /tmp/backup.sql

You will see hundreds of errors for missing roles (role "postgres" does not exist). This is expected. The tables, indexes, and data all loaded. The errors come from GRANT statements in the dump that reference roles your fresh database does not have yet. Once all the roles from the previous step exist, fix permissions:

GRANT ALL ON ALL TABLES IN SCHEMA public TO postgres, anon, authenticated, service_role;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO postgres, anon, authenticated, service_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO anon, authenticated, service_role;

Tip: heredocs and kubectl exec do not mix. Multi-line SQL piped via heredoc into kubectl exec often gets mangled by the shell. Write your SQL to a file, kubectl cp it into the pod, and run with -f. Boring but reliable.

Deploying the Supabase API Layer

Supabase is not just Postgres. At minimum you need three services for a working API:

| Service | Image | What It Does |
|---|---|---|
| PostgREST | postgrest/postgrest:v12.2.3 | Turns your Postgres tables into a REST API automatically |
| GoTrue | supabase/gotrue:v2.158.1 | Handles user signup, login, JWT issuance |
| Kong | kong:3.5 | API gateway that routes /rest/v1/ to PostgREST and /auth/v1/ to GoTrue |

PostgREST is a translator. Your app speaks HTTP (“give me all coffees rated above 4 stars”). PostgREST translates that into SQL and hands back JSON. GoTrue is the bouncer: checks passwords, issues JWT tokens, tells PostgREST what role each request runs as.
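
For example, the "coffees rated above 4 stars" request looks like this through Kong (the coffees table and its columns are hypothetical, and the IP is your Kong LoadBalancer address):

# PostgREST turns the query string into SQL and returns JSON
curl "http://<kong-ip>:8000/rest/v1/coffees?select=name,rating&rating=gt.4" \
  -H "apikey: $ANON_KEY" \
  -H "Authorization: Bearer $ANON_KEY"

PostgREST translates that into roughly SELECT name, rating FROM coffees WHERE rating > 4, executed as the anon role so RLS applies.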

Kong is the front door. It listens on port 8000 via a LoadBalancer service and routes requests to the right backend. Its config is a YAML file mounted as a ConfigMap:

_format_version: "2.1"
_transform: true
services:
  - name: rest-v1
    url: http://rest:3000/
    routes:
      - name: rest-v1
        strip_path: true
        paths:
          - /rest/v1/
    plugins:
      - name: cors
  - name: auth-v1
    url: http://auth:9999/
    routes:
      - name: auth-v1
        strip_path: true
        paths:
          - /auth/v1/
    plugins:
      - name: cors

Gotcha: GoTrue needs CREATE on the public schema. GoTrue runs database migrations on startup. If supabase_auth_admin only has permissions on the auth schema, it crashes with “permission denied for schema public.” Grant USAGE, CREATE ON SCHEMA public TO supabase_auth_admin before starting GoTrue.

JWT Tokens: The Handshake

Every Supabase service shares a JWT secret. This is the password they all use to verify that a request is legitimate. The flow:

  1. Your frontend sends Authorization: Bearer <token>
  2. Kong passes it to PostgREST
  3. PostgREST verifies the signature using the shared secret
  4. PostgREST reads the role claim (anon or authenticated)
  5. PostgREST runs the SQL as that role, so RLS policies apply

Generate a secret and two tokens:

# Plain hex string -- important, not base64
openssl rand -hex 32

Then mint the anon and service_role tokens with a short Python script:

import hmac, hashlib, base64, json

secret = "your-hex-secret-here"

def make_jwt(payload, secret):
    header = base64.urlsafe_b64encode(
        json.dumps({"alg":"HS256","typ":"JWT"}).encode()
    ).rstrip(b'=').decode()
    body = base64.urlsafe_b64encode(
        json.dumps(payload).encode()
    ).rstrip(b'=').decode()
    sig_input = f"{header}.{body}"
    sig = base64.urlsafe_b64encode(
        hmac.new(secret.encode(), sig_input.encode(), hashlib.sha256).digest()
    ).rstrip(b'=').decode()
    return f"{header}.{body}.{sig}"

anon = make_jwt({"role":"anon","iss":"supabase",
    "iat":1735689600,"exp":2051222400}, secret)
service = make_jwt({"role":"service_role","iss":"supabase",
    "iat":1735689600,"exp":2051222400}, secret)
print(f"ANON_KEY={anon}")
print(f"SERVICE_ROLE_KEY={service}")

Store the secret and tokens in a Kubernetes Secret so all services reference them.

Gotcha: base64-encoded secrets vs plain strings. If you generate your secret with openssl rand -base64 32, PostgREST uses that string as-is for HMAC verification. But many JWT libraries base64-decode the secret first before signing. The signatures will not match and you get JWSError JWSInvalidSignature. Use openssl rand -hex 32 for a plain string. No encoding ambiguity.
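
You can see the mismatch in a few lines of Python: signing with the raw base64 string versus the decoded bytes produces different signatures, which is exactly the JWSInvalidSignature situation. (The fixed dummy secret here stands in for openssl output.)

```python
import base64
import hashlib
import hmac

# A secret generated the "wrong" way: openssl rand -base64 32
b64_secret = base64.b64encode(b"\x01" * 32).decode()
msg = b"header.payload"

# PostgREST-style: use the string's bytes as-is as the HMAC key
sig_raw = hmac.new(b64_secret.encode(), msg, hashlib.sha256).digest()

# Many JWT libraries: base64-decode the secret first, then sign
sig_decoded = hmac.new(base64.b64decode(b64_secret), msg, hashlib.sha256).digest()

print(sig_raw == sig_decoded)  # False -- the two sides will never agree
```

With a hex secret there is no decode step for the two sides to disagree about.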

Containerizing the Frontend

The coffee review app is a Next.js app with Bun as the package manager. Containerizing it requires a multi-stage Docker build:

FROM oven/bun:1 AS deps
WORKDIR /app
COPY package.json bun.lock ./
COPY frontend/package.json ./frontend/
RUN bun install
RUN cd frontend && bun install

FROM oven/bun:1 AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=deps /app/frontend/node_modules ./frontend/node_modules
COPY . .
ARG NEXT_PUBLIC_SUPABASE_URL
ARG NEXT_PUBLIC_SUPABASE_ANON_KEY
ENV NEXT_PUBLIC_SUPABASE_URL=$NEXT_PUBLIC_SUPABASE_URL
ENV NEXT_PUBLIC_SUPABASE_ANON_KEY=$NEXT_PUBLIC_SUPABASE_ANON_KEY
RUN cd frontend && ./node_modules/.bin/next build

FROM node:20-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/frontend/.next/standalone ./
COPY --from=builder /app/frontend/.next/static ./frontend/.next/static
EXPOSE 3000
CMD ["node", "frontend/server.js"]

Gotcha: NEXT_PUBLIC_* vars are baked at build time. Next.js inlines environment variables starting with NEXT_PUBLIC_ into the JavaScript bundle during next build. Setting them at runtime does nothing. If you build with placeholder values, the app tries to talk to “http://placeholder” in the browser. You must set the real values as build-time ARGs.

Gotcha: standalone output path in monorepos. If your Next.js app is in a frontend/ subdirectory, the standalone output preserves that structure. server.js ends up at frontend/server.js, not the root. If your CMD points to the wrong path, the container crashes with “Cannot find module.” Check with: docker run --rm <image> find /app -name "server.js" -path "*/standalone/*"

Gotcha: next not found in monorepos. If next is a dependency of frontend/package.json (not the root), the root bun install does not install it. Add a second step: RUN cd frontend && bun install.

Gotcha: output: 'standalone' must be committed. Next.js does not produce a standalone directory unless you set output: 'standalone' in your next.config.js. If you set it locally but forget to commit, CI builds without it and the Dockerfile COPY step fails. Always run git status before pushing.

Also add a .dockerignore to keep builds fast and avoid baking junk into images:

node_modules
frontend/node_modules
.git
.next
frontend/.next
*.tar

The CI/CD Pipeline

Up to this point, deploying a change meant: build the image on your laptop, save it to a tar, SCP it to the server, import it into containerd, and run kubectl. Five manual steps across two machines for a color change. That is not a pipeline, that is a chore.

With Forgejo Actions, the workflow becomes: git push. That is it. Forgejo builds the image, pushes it to its own container registry, and tells k3s to update the deployment.

Setting Up the Runner

Forgejo Actions needs a runner to execute jobs. The runner runs as a pod in k3s alongside a Docker-in-Docker (DinD) sidecar that handles image builds. This is the trickiest part of the entire setup.

Enable Actions in Forgejo’s config:

kubectl exec -n forgejo deploy/forgejo -- sh -c \
  'cat >> /data/gitea/conf/app.ini << EOF

[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://github.com
EOF'

kubectl rollout restart deployment/forgejo -n forgejo

Get a runner registration token from Forgejo’s admin panel (Site Administration > Actions > Runners > Create new Runner), then deploy the runner.

The runner deployment is a two-container pod. These are the decisions that took the most debugging:

  • The DinD container must run with DOCKER_TLS_CERTDIR: "" (TLS disabled). Otherwise job containers cannot access the Docker daemon because TLS certs are not mounted into them.
  • The runner config must set container.network: host so job containers share the pod network and can reach DinD on tcp://localhost:2375.
  • The runner must wait for DinD to be ready before starting. Without a health check loop, the runner crashes because the Docker daemon is not listening yet.
  • Add --insecure-registry flags to DinD pointing at your Forgejo instance so it can push images over HTTP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: runner-config
  namespace: forgejo
data:
  config.yaml: |
    runner:
      capacity: 1
      timeout: 3600s
    container:
      network: host
      privileged: true
      docker_host: tcp://localhost:2375
      valid_volumes: []
      options: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forgejo-runner
  namespace: forgejo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo-runner
  template:
    metadata:
      labels:
        app: forgejo-runner
    spec:
      containers:
        - name: runner
          image: code.forgejo.org/forgejo/runner:6.3.1
          command:
            - sh
            - -c
            - |
              echo "Waiting for Docker..."
              for i in $(seq 1 60); do
                wget -q --spider http://localhost:2375/_ping 2>/dev/null && break
                sleep 2
              done
              echo "Docker is ready"
              if [ ! -f /data/.runner ]; then
                forgejo-runner register \
                  --instance http://forgejo.forgejo.svc:80 \
                  --token <your-registration-token> \
                  --name k3s-runner \
                  --labels ubuntu-latest:docker://node:20-bookworm \
                  --no-interactive
              fi
              cp /config/config.yaml /data/config.yaml
              forgejo-runner daemon --config /data/config.yaml
          env:
            - name: DOCKER_HOST
              value: tcp://localhost:2375
          volumeMounts:
            - name: runner-data
              mountPath: /data
            - name: runner-config
              mountPath: /config
        - name: docker
          image: docker:27-dind
          securityContext:
            privileged: true
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          args:
            - --insecure-registry=forgejo.forgejo.svc:80
            - --insecure-registry=<your-forgejo-ip>
          volumeMounts:
            - name: docker-data
              mountPath: /var/lib/docker
      volumes:
        - name: runner-data
          emptyDir: {}
        - name: docker-data
          emptyDir: {}
        - name: runner-config
          configMap:
            name: runner-config

Why is this so complicated? Forgejo Actions spawns a fresh Docker container for each CI job. That container needs to build Docker images, which means it needs access to a Docker daemon. But the job container and the daemon run in separate contexts. In GitHub Actions, this is hidden because their hosted runners have Docker pre-installed. When you self-host, you manage the plumbing yourself. The DinD sidecar pattern is the standard solution, but getting the networking, TLS, and startup ordering right is genuinely the hardest part of the pipeline setup.

Configure k3s to Pull from Forgejo

k3s needs to know how to pull images from your Forgejo registry over HTTP:

# /etc/rancher/k3s/registries.yaml
mirrors:
  "your-forgejo-ip":
    endpoint:
      - "http://your-forgejo-ip"

Restart k3s after adding this file.

Add Secrets to the Repository

In Forgejo, go to your repo’s Settings > Actions > Secrets and add:

| Name | Value |
|---|---|
| REGISTRY_USER | Your Forgejo username |
| REGISTRY_PASSWORD | Your Forgejo password |
| KUBECONFIG_DATA | Contents of /etc/rancher/k3s/k3s.yaml (change 127.0.0.1 to the node’s real IP) |
| SUPABASE_ANON_KEY | The anon JWT you generated |

The Workflow File

Create .forgejo/workflows/deploy.yaml in your repo:

name: Build and Deploy

on:
  push:
    branches: [main]

env:
  DOCKER_HOST: tcp://localhost:2375
  SUPABASE_URL: http://<your-kong-ip>:8000  # must be reachable from the user's browser

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: https://code.forgejo.org/actions/checkout@v4

      - name: Install Docker CLI
        run: |
          apt-get update && apt-get install -y ca-certificates curl
          install -m 0755 -d /etc/apt/keyrings
          curl -fsSL https://download.docker.com/linux/debian/gpg \
            -o /etc/apt/keyrings/docker.asc
          echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] \
            https://download.docker.com/linux/debian bookworm stable" \
            > /etc/apt/sources.list.d/docker.list
          apt-get update && apt-get install -y docker-ce-cli

      - name: Build image
        run: |
          docker build \
            --build-arg NEXT_PUBLIC_SUPABASE_URL=$SUPABASE_URL \
            --build-arg NEXT_PUBLIC_SUPABASE_ANON_KEY=${{ secrets.SUPABASE_ANON_KEY }} \
            -t registry/user/app:${{ github.sha }} \
            -t registry/user/app:latest \
            .

      - name: Push to registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login your-registry \
            -u ${{ secrets.REGISTRY_USER }} --password-stdin
          docker push registry/user/app:${{ github.sha }}
          docker push registry/user/app:latest

      - name: Deploy to k3s
        run: |
          curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
          chmod +x kubectl
          mkdir -p ~/.kube
          echo '${{ secrets.KUBECONFIG_DATA }}' > ~/.kube/config
          chmod 600 ~/.kube/config
          ./kubectl set image deployment/coffee-app \
            coffee-app=registry/user/app:${{ github.sha }} \
            -n supabase
          ./kubectl rollout status deployment/coffee-app \
            -n supabase --timeout=120s

Commit and push. The pipeline triggers automatically. Within 2-3 minutes your app is rebuilt and redeployed.

Why install Docker CLI in every run? The job runs in a Node.js container that does not have Docker. We install the CLI so it can talk to the DinD daemon. This adds about 16 seconds to each build. A custom runner image with Docker pre-installed would eliminate this, but for a solo dev the simplicity of the stock image is worth the 16 seconds.
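
If that 16 seconds ever starts to grate, the custom image is a small Dockerfile. This is a sketch that simply bakes in the same install steps the workflow runs; the image name you tag and push it as is your choice.

```dockerfile
# Hypothetical custom job image: Node 20 plus the Docker CLI pre-installed
FROM node:20-bookworm
RUN apt-get update && apt-get install -y ca-certificates curl \
    && install -m 0755 -d /etc/apt/keyrings \
    && curl -fsSL https://download.docker.com/linux/debian/gpg \
       -o /etc/apt/keyrings/docker.asc \
    && echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] \
       https://download.docker.com/linux/debian bookworm stable" \
       > /etc/apt/sources.list.d/docker.list \
    && apt-get update && apt-get install -y docker-ce-cli \
    && rm -rf /var/lib/apt/lists/*
```

Push it to your Forgejo registry and point the runner's label at it instead of node:20-bookworm, then drop the "Install Docker CLI" step from the workflow.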

The Workflow After Setup

From now on, deploying a change to your coffee review app is:

# Change the header color, fix a bug, add a feature
git add .
git commit -m "make header blue"
git push
# Done. App updates in ~2 minutes.

No docker build. No SCP. No kubectl. Just push.

Every Gotcha We Hit

A consolidated table for the “I just want to search for my specific error” crowd:

| Error | Cause | Fix |
|---|---|---|
| secrets-encrypt status: Disabled | Service file flags conflicting with config.yaml | Clean reinstall with the flag in config.yaml from first boot |
| password authentication failed in Forgejo | SSL set to “Require” but Postgres has no SSL | Change to “Disable” in the install wizard |
| Push to create is not enabled | Forgejo requires repos to exist before pushing | Create repo in web UI first, or enable push-to-create |
| data directory has wrong ownership | Overriding the Supabase PG entrypoint | Remove command override, let entrypoint handle it |
| role "postgres" does not exist | Supabase PG17 only creates supabase_admin | Connect as supabase_admin, create roles manually |
| JWSError JWSInvalidSignature | JWT signed with base64-decoded secret | Use openssl rand -hex 32 for a plain string |
| permission denied for schema public | GoTrue’s role lacks CREATE on public | GRANT USAGE, CREATE ON SCHEMA public TO supabase_auth_admin |
| next: command not found | next is a frontend dep, not root | Install deps in both root and frontend directories |
| Cannot find module '/app/server.js' | Standalone preserves subdirectory structure | CMD ["node", "frontend/server.js"] |
| No data in the frontend | NEXT_PUBLIC_* baked with placeholders | Rebuild with real values as build ARGs |
| standalone directory not found | output: 'standalone' not committed | Commit next.config.js before pushing |
| CI: Cannot connect to Docker daemon | Job container cannot reach DinD | container.network: host in runner config, disable TLS |
| CI: open /certs/client/ca.pem: no such file | TLS enabled but certs not in job container | DOCKER_TLS_CERTDIR: "" on DinD |
| CI: runner crashes on startup | Runner starts before DinD is ready | Health check loop (wget --spider) before daemon |
| CI: timeout: unmarshal !!int | Runner config needs duration string | timeout: 3600s, not timeout: 3600 |
| CI: node: executable not found | Checkout action needs Node.js | Use runs-on: ubuntu-latest mapped to a Node image |

What Comes Next

What we built is a working pipeline: push code, app deploys. But it is not yet a full enterprise pipeline. Here is what the next layers look like:

| Layer | What It Adds | Status |
|---|---|---|
| Tests in CI | Run unit/integration tests before building | Easy, just add a workflow step |
| Argo CD | GitOps: cluster state matches Git, with drift detection and auto-rollback | Next project |
| Image scanning | Trivy scans for CVEs before deploy | One workflow step |
| Staging environment | Deploy to staging, smoke test, promote to prod | Needs a second namespace |
| Observability | Prometheus + Grafana for metrics, Loki for logs | Separate deploy |
| Secrets management | OpenBao or Infisical instead of plaintext k8s secrets | Separate deploy |

The difference between what we have and full GitOps is one layer: right now CI directly mutates the cluster with kubectl set image. With Argo CD, CI would update an image tag in a manifests Git repo, and Argo CD would sync the cluster to match. Nobody runs kubectl directly. Git becomes the audit log for every change. That is the pattern enterprises use, and it is the pattern you want if you ever have agents autonomously deploying services.
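
For a flavor of what that looks like, an Argo CD Application pointing at a manifests repo is roughly the following sketch (the repo URL, path, and names are invented for illustration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: coffee-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: http://<forgejo-ip>/user/coffee-app-manifests.git
    targetRevision: main
    path: k8s                 # directory of plain manifests in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: supabase
  syncPolicy:
    automated:
      prune: true
      selfHeal: true          # drift is reverted to match Git
```

CI's only job then becomes committing a new image tag to that repo; Argo CD does the rest.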

But for a solo dev shipping a toy coffee review app to see if the concept has legs, what we have is enough. Ship first, refine the pipeline when the pain justifies it.


Total recurring cost: $0/month. k3s, Supabase, Forgejo, MetalLB, and the CI runner are all open source. The only cost is electricity and hardware.

Compared to running this on Supabase Cloud ($25/month) + Vercel ($20/month) + GitHub (free until you need Actions minutes), self-hosting saves roughly $540/year per project. Multiply by a dozen experiments and the hardware pays for itself quickly.