I recently deployed a coffee review app (Next.js frontend, Supabase backend) entirely on a single k3s node with a full CI/CD pipeline. Push code, tests run, image builds, app deploys. No cloud services, no monthly bills. This post walks through every step and every gotcha.
If you have never touched Kubernetes, think of it like building a restaurant. Your server is the building. k3s is the restaurant manager who decides which chef works which station. Your app containers are the chefs. And the CI/CD pipeline is the hiring process that trains, vets, and assigns new chefs automatically whenever you update the menu. You do not need to micromanage any of it.
Table of Contents
- The Architecture
- Prerequisites
- k3s Housekeeping
- MetalLB: Giving Services Real IPs
- Forgejo: Your Own Git Server
- Deploying Supabase Postgres
- Restoring Your Database
- Deploying the Supabase API Layer
- JWT Tokens: The Handshake
- Containerizing the Frontend
- The CI/CD Pipeline
- Every Gotcha We Hit
- What Comes Next
The Architecture
Here is the full picture. A user opens a browser, hits the frontend. The frontend talks to Supabase through Kong (an API gateway). Supabase is really just a collection of services sitting in front of PostgreSQL. Forgejo hosts the code and container images, and its built-in CI runner builds and deploys everything automatically on each push.
Think of Kong as the front desk of a hotel. You do not walk directly into the kitchen (PostgreSQL). You tell the front desk what you want, and they route your request to the right department: PostgREST for data, GoTrue for login and signup.
Prerequisites
This guide assumes you have a working k3s node. Specifically:
- A server (VM or bare metal) running Ubuntu 24.04
- k3s installed and running (kubectl get nodes shows Ready)
- Traefik and servicelb disabled at install (we use MetalLB instead)
- A config file at /etc/rancher/k3s/config.yaml
- SSH key auth configured (no password login)
If starting from scratch, install k3s with a config like this:
# /etc/rancher/k3s/config.yaml
node-name: "k3s-node01"
write-kubeconfig-mode: "0644"
data-dir: "/mnt/k3s-data/k3s"
default-local-storage-path: "/mnt/k3s-data/local-storage"
cluster-cidr: "10.42.0.0/16"
service-cidr: "10.43.0.0/16"
flannel-backend: "vxlan"
disable:
- traefik
- servicelb
secrets-encryption: true
Gotcha: config.yaml must be the only source of flags. The systemd unit should contain just ExecStart=/usr/local/bin/k3s server with no flags; flags in the service file conflict with the config file.
Gotcha: If you try to turn encryption on later with k3s secrets-encrypt enable, it can fail silently or generate a config with no actual encryption key. The cleanest path is including secrets-encryption: true in config.yaml before the first start. If you already have a cluster without it, do a clean reinstall: uninstall k3s, delete the data directory, add the flag, start fresh. On a nearly empty cluster this takes 2 minutes.
k3s Housekeeping
Before deploying anything, harden the node. 10 minutes now prevents headaches later.
SSH Key Auth
Generate a key on your workstation (ssh-keygen -t ed25519), copy it to the node (ssh-copy-id user@your-node), verify you can log in without a password, then disable password auth:
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh
Always keep an existing terminal open when changing SSH config. That open session is your lifeline if you lock yourself out.
Stable IP
Your k3s node needs a fixed IP. Set a DHCP reservation in your router mapping the node’s MAC address to a specific IP, or configure a static IP inside Ubuntu. If the IP changes, everything downstream breaks.
Disable Swap
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
Backup
If you are on Proxmox, set up a VM backup before deploying workloads. A weekly snapshot with ZSTD compression gives you a rollback point. For a mostly-empty VM, a compressed backup is only a few gigabytes.
MetalLB: Giving Services Real IPs
In the cloud, when you create a Kubernetes Service of type LoadBalancer, AWS or GCP assigns a public IP automatically. On bare metal, nothing does that. MetalLB fills the gap.
Think of MetalLB as a receptionist handing out phone extensions. When a new service says “I need to be reachable,” MetalLB assigns it an IP from a pool you define.
Carve Out an IP Range
Pick a range on your local subnet that does not overlap with your DHCP pool. If your DHCP hands out .100-.199, give MetalLB .200-.250.
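If you want to double-check that the pool you picked stays clear of DHCP, a few lines of stdlib Python can verify it. This is just a convenience sketch; the example ranges are assumptions, so substitute your own subnet:

```python
import ipaddress

def ranges_overlap(a_start, a_end, b_start, b_end):
    """Return True if two inclusive IPv4 ranges share any address."""
    a0, a1 = int(ipaddress.IPv4Address(a_start)), int(ipaddress.IPv4Address(a_end))
    b0, b1 = int(ipaddress.IPv4Address(b_start)), int(ipaddress.IPv4Address(b_end))
    return max(a0, b0) <= min(a1, b1)

# Example: DHCP hands out .100-.199, MetalLB gets .200-.250
dhcp_pool = ("192.168.1.100", "192.168.1.199")
metallb_pool = ("192.168.1.200", "192.168.1.250")
print(ranges_overlap(*dhcp_pool, *metallb_pool))  # False -> no collision
```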
Install MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml
kubectl wait --namespace metallb-system \
--for=condition=ready pod \
--selector=app=metallb \
--timeout=90s
Configure the IP Pool
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.200-192.168.1.250 # adjust for your subnet
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
L2 mode means MetalLB responds to ARP requests on your local network. Simple, no BGP routers needed, perfect for a home setup.
Forgejo: Your Own Git Server
Forgejo is a lightweight, open-source Git server (a fork of Gitea) with a built-in container registry and CI system. We deploy it on k3s so we have a place to push code, store Docker images, and run CI pipelines without depending on GitHub.
The deployment has two parts: a Postgres database for Forgejo’s internal data, and Forgejo itself. Both get persistent volume claims so data survives pod restarts.
# Forgejo's Postgres
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: forgejo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo-postgres
  template:
    metadata:
      labels:
        app: forgejo-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          env:
            - name: POSTGRES_USER
              value: forgejo
            - name: POSTGRES_PASSWORD
              value: <your-password>
            - name: POSTGRES_DB
              value: forgejo
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
              subPath: pgdata
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: forgejo-postgres
Give Forgejo a LoadBalancer service and MetalLB assigns it an IP. Open that in a browser, run through the setup wizard, and you have a local GitHub.
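A minimal Service manifest for that might look like the following — a sketch assuming your Forgejo pods are labeled app: forgejo and listen on Forgejo's default web port 3000:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: forgejo
  namespace: forgejo
spec:
  type: LoadBalancer
  selector:
    app: forgejo      # must match your Forgejo deployment's pod labels
  ports:
    - name: http
      port: 80
      targetPort: 3000  # Forgejo's default web port
    - name: ssh
      port: 22
      targetPort: 22    # git-over-SSH
```

Exposing HTTP on service port 80 also lines up with the http://forgejo.forgejo.svc:80 address used by the CI runner later.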
Tip: Set ENABLE_PUSH_CREATE_USER = true under [repository] in Forgejo’s app.ini. Then git push to a non-existent repo auto-creates it.
Deploying Supabase Postgres
Supabase uses a special PostgreSQL image (supabase/postgres) pre-loaded with extensions like pgvector, pg_graphql, and PostGIS. This is separate from Forgejo’s Postgres. Deploy it in a supabase namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: supabase
spec:
  replicas: 1
  selector:
    matchLabels:
      app: supabase-db
  template:
    metadata:
      labels:
        app: supabase-db
    spec:
      containers:
        - name: postgres
          image: supabase/postgres:17.6.1.075
          env:
            - name: POSTGRES_PASSWORD
              value: <your-db-password>
            - name: POSTGRES_DB
              value: postgres
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
              subPath: pgdata
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: supabase-db
Gotcha: Some docs suggest overriding the container command with command: ["postgres", "-c", "config_file=..."]. Do not do this with the Supabase PG17 image. The image’s entrypoint handles the root-to-postgres user switch internally. Overriding it causes Postgres to crash with “data directory has wrong ownership.” Let the entrypoint do its thing.
Gotcha: The Supabase PG17 image creates supabase_admin as the superuser, not postgres. Connect with: psql "postgresql://supabase_admin:<password>@localhost:5432/postgres"
Gotcha: The PG17 image no longer ships some extensions the PG15 image had, including pgjwt and timescaledb. If your backup references these, strip those lines (sed -i '/CREATE EXTENSION.*pgjwt/d' backup.sql) or use the PG15 image.
Restoring Your Database
A fresh Supabase Postgres image is missing the roles and schemas that Supabase Cloud had. Create them before restoring a backup.
Create Roles and Schemas
-- Schemas
CREATE SCHEMA IF NOT EXISTS auth;
CREATE SCHEMA IF NOT EXISTS storage;
CREATE SCHEMA IF NOT EXISTS extensions;
-- Core roles
CREATE ROLE postgres WITH LOGIN SUPERUSER PASSWORD '<password>';
CREATE ROLE anon NOLOGIN NOINHERIT;
CREATE ROLE authenticated NOLOGIN NOINHERIT;
CREATE ROLE service_role NOLOGIN NOINHERIT BYPASSRLS;
-- PostgREST connector
CREATE ROLE authenticator WITH LOGIN PASSWORD '<password>' NOINHERIT;
GRANT anon TO authenticator;
GRANT authenticated TO authenticator;
GRANT service_role TO authenticator;
-- Service-specific roles
CREATE ROLE supabase_auth_admin WITH LOGIN PASSWORD '<password>' NOINHERIT;
GRANT ALL ON SCHEMA auth TO supabase_auth_admin;
GRANT USAGE, CREATE ON SCHEMA public TO supabase_auth_admin;
CREATE ROLE supabase_storage_admin WITH LOGIN PASSWORD '<password>' NOINHERIT;
GRANT ALL ON SCHEMA storage TO supabase_storage_admin;
GRANT USAGE ON SCHEMA public TO anon, authenticated, service_role;
ALTER SCHEMA public OWNER TO postgres;
anon is for unauthenticated API users. authenticated is for logged-in users. service_role bypasses RLS for server-side operations. authenticator is what PostgREST connects as, switching to the appropriate role based on the JWT in each request. Think of it like a bouncer at a club who checks your ID and gives you a different wristband depending on your ticket type.
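To make the wristband analogy concrete, here is a simplified Python sketch of the role-switching decision. The real PostgREST verifies the token signature before trusting any claim; this only illustrates the claim-to-role mapping and is not its actual code:

```python
import base64, json

def role_for_request(bearer_token=None, anon_role="anon"):
    """Decide which database role a request runs as (illustrative only)."""
    if bearer_token is None:
        return anon_role  # no JWT -> unauthenticated, runs as anon
    payload_b64 = bearer_token.split(".")[1]           # header.payload.signature
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    return claims.get("role", anon_role)               # e.g. "authenticated"
```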
Restore and Grant
# Copy backup into the pod
kubectl cp backup.sql supabase/<pod-name>:/tmp/backup.sql
# Restore
kubectl exec -n supabase deploy/postgres -- \
psql "postgresql://supabase_admin:<password>@localhost:5432/postgres" \
-f /tmp/backup.sql
You will see hundreds of errors for missing roles (role "postgres" does not exist). This is expected. The tables, indexes, and data all loaded. The errors are GRANT statements that reference roles that did not exist at dump time. After creating the roles, fix permissions:
GRANT ALL ON ALL TABLES IN SCHEMA public TO postgres, anon, authenticated, service_role;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO postgres, anon, authenticated, service_role;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO anon, authenticated, service_role;
Tip: Inline SQL passed through kubectl exec often gets mangled by the shell. Write your SQL to a file, kubectl cp it into the pod, and run it with -f. Boring but reliable.
Deploying the Supabase API Layer
Supabase is not just Postgres. At minimum you need three services for a working API:
| Service | Image | What It Does |
|---|---|---|
| PostgREST | postgrest/postgrest:v12.2.3 | Turns your Postgres tables into a REST API automatically |
| GoTrue | supabase/gotrue:v2.158.1 | Handles user signup, login, JWT issuance |
| Kong | kong:3.5 | API gateway that routes /rest/v1/ to PostgREST and /auth/v1/ to GoTrue |
PostgREST is a translator. Your app speaks HTTP (“give me all coffees rated above 4 stars”). PostgREST translates that into SQL and hands back JSON. GoTrue is the bouncer: checks passwords, issues JWT tokens, tells PostgREST what role each request runs as.
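As a sketch of what that HTTP looks like from the app's side: PostgREST encodes filters as column=operator.value query parameters. The Kong address, the coffees table, and the rating column here are assumptions for illustration:

```python
from urllib.parse import urlencode

KONG_URL = "http://192.168.1.201:8000"  # assumed MetalLB IP of the Kong service

def rest_query(table, **filters):
    """Build a PostgREST query URL, e.g. rating='gt.4' -> rating=gt.4."""
    return f"{KONG_URL}/rest/v1/{table}?{urlencode(filters)}"

# "all coffees rated above 4 stars"
url = rest_query("coffees", select="*", rating="gt.4")
# Send a GET with the anon key in both headers:
headers = {"apikey": "<anon-jwt>", "Authorization": "Bearer <anon-jwt>"}
```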
Kong is the front door. It listens on port 8000 via a LoadBalancer service and routes requests to the right backend. Its config is a YAML file mounted as a ConfigMap:
_format_version: "2.1"
_transform: true
services:
  - name: rest-v1
    url: http://rest:3000/
    routes:
      - name: rest-v1
        strip_path: true
        paths:
          - /rest/v1/
    plugins:
      - name: cors
  - name: auth-v1
    url: http://auth:9999/
    routes:
      - name: auth-v1
        strip_path: true
        paths:
          - /auth/v1/
    plugins:
      - name: cors
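To make strip_path concrete, here is a toy Python model of what this config does to an incoming request path — not Kong's actual matching logic, just its effect:

```python
# Mirrors the services/routes in the ConfigMap above
ROUTES = {
    "/rest/v1/": "http://rest:3000/",
    "/auth/v1/": "http://auth:9999/",
}

def proxy_target(path):
    """Map a request path to its upstream URL, stripping the matched prefix."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream + path[len(prefix):]  # strip_path: true
    return None  # no route matched

print(proxy_target("/rest/v1/coffees"))  # http://rest:3000/coffees
```

PostgREST never sees the /rest/v1 prefix, which is why its routes line up directly with your table names.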
Gotcha: If supabase_auth_admin only has permissions on the auth schema, GoTrue crashes with “permission denied for schema public.” Run GRANT USAGE, CREATE ON SCHEMA public TO supabase_auth_admin before starting GoTrue.
JWT Tokens: The Handshake
Every Supabase service shares a JWT secret. This is the password they all use to verify that a request is legitimate. The flow:
- Your frontend sends Authorization: Bearer <token>
- Kong passes it to PostgREST
- PostgREST verifies the signature using the shared secret
- PostgREST reads the role claim (anon or authenticated)
- PostgREST runs the SQL as that role, so RLS policies apply
Generate a secret and two tokens:
# Plain hex string -- important, not base64
openssl rand -hex 32
import hmac, hashlib, base64, json

secret = "your-hex-secret-here"

def make_jwt(payload, secret):
    header = base64.urlsafe_b64encode(
        json.dumps({"alg":"HS256","typ":"JWT"}).encode()
    ).rstrip(b'=').decode()
    body = base64.urlsafe_b64encode(
        json.dumps(payload).encode()
    ).rstrip(b'=').decode()
    sig_input = f"{header}.{body}"
    sig = base64.urlsafe_b64encode(
        hmac.new(secret.encode(), sig_input.encode(), hashlib.sha256).digest()
    ).rstrip(b'=').decode()
    return f"{header}.{body}.{sig}"

anon = make_jwt({"role":"anon","iss":"supabase",
                 "iat":1735689600,"exp":2051222400}, secret)
service = make_jwt({"role":"service_role","iss":"supabase",
                    "iat":1735689600,"exp":2051222400}, secret)
print(f"ANON_KEY={anon}")
print(f"SERVICE_ROLE_KEY={service}")
Store the secret and tokens in a Kubernetes Secret so all services reference them.
Gotcha: If you generate the secret with openssl rand -base64 32, PostgREST uses that string as-is for HMAC verification. But many JWT libraries base64-decode the secret before signing. The signatures will not match and you get JWSError JWSInvalidSignature. Use openssl rand -hex 32 for a plain string. No encoding ambiguity.
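You can demonstrate the mismatch in a few lines of stdlib Python: signing the same message with the raw secret string versus its base64-decoded bytes yields different signatures, so the two sides can never agree:

```python
import base64, hashlib, hmac

secret = base64.b64encode(b"\x00" * 32).decode()  # shaped like openssl rand -base64 32 output
message = b"header.payload"

# PostgREST treats the configured secret string as the raw HMAC key...
sig_as_string = hmac.new(secret.encode(), message, hashlib.sha256).digest()

# ...while many JWT libraries base64-decode the secret before signing.
sig_decoded = hmac.new(base64.b64decode(secret), message, hashlib.sha256).digest()

print(sig_as_string != sig_decoded)  # True -> JWSInvalidSignature in practice
```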
Containerizing the Frontend
The coffee review app is a Next.js app with Bun as the package manager. Containerizing it requires a multi-stage Docker build:
FROM oven/bun:1 AS deps
WORKDIR /app
COPY package.json bun.lock ./
COPY frontend/package.json ./frontend/
RUN bun install
RUN cd frontend && bun install
FROM oven/bun:1 AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=deps /app/frontend/node_modules ./frontend/node_modules
COPY . .
ARG NEXT_PUBLIC_SUPABASE_URL
ARG NEXT_PUBLIC_SUPABASE_ANON_KEY
ENV NEXT_PUBLIC_SUPABASE_URL=$NEXT_PUBLIC_SUPABASE_URL
ENV NEXT_PUBLIC_SUPABASE_ANON_KEY=$NEXT_PUBLIC_SUPABASE_ANON_KEY
RUN cd frontend && ./node_modules/.bin/next build
FROM node:20-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/frontend/.next/standalone ./
COPY --from=builder /app/frontend/.next/static ./frontend/.next/static
EXPOSE 3000
CMD ["node", "frontend/server.js"]
Gotcha: Next.js bakes variables prefixed with NEXT_PUBLIC_ into the JavaScript bundle during next build. Setting them at runtime does nothing. If you build with placeholder values, the app tries to talk to “http://placeholder” in the browser. You must set the real values as build-time ARGs.
Gotcha: Because the app lives in a frontend/ subdirectory, the standalone output preserves that structure. server.js ends up at frontend/server.js, not the root. If your CMD points to the wrong path, the container crashes with “Cannot find module.” Check with: docker run --rm <image> find /app -name "server.js" -path "*/standalone/*"
Gotcha: Because next is a dependency of frontend/package.json (not the root), the root bun install does not install it. Add a second step: RUN cd frontend && bun install.
Gotcha: Standalone output requires output: 'standalone' in your next.config.js. If you set it locally but forget to commit, CI builds without it and the Dockerfile COPY step fails. Always run git status before pushing.
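One cheap safety net for the placeholder problem is scanning the build output for leftover placeholder URLs before the image ships. A small sketch — the directory and needle string are assumptions, point it at your own .next output:

```python
import pathlib

def find_placeholders(build_dir, needle="http://placeholder"):
    """List JS files in the build output that still contain the placeholder."""
    return [str(p) for p in pathlib.Path(build_dir).rglob("*.js")
            if needle in p.read_text(errors="ignore")]

# e.g. as a CI step after next build:
# hits = find_placeholders("frontend/.next")
# if hits: raise SystemExit(f"placeholder baked into bundle: {hits}")
```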
Also add a .dockerignore to keep builds fast and avoid baking junk into images:
node_modules
frontend/node_modules
.git
.next
frontend/.next
*.tar
The CI/CD Pipeline
Up to this point, deploying a change meant: build the image on your laptop, save it to a tar, SCP it to the server, import it into containerd, and run kubectl. Five manual steps across two machines for a color change. That is not a pipeline; that is a chore.
With Forgejo Actions, the workflow becomes: git push. That is it. Forgejo builds the image, pushes it to its own container registry, and tells k3s to update the deployment.
Setting Up the Runner
Forgejo Actions needs a runner to execute jobs. The runner runs as a pod in k3s alongside a Docker-in-Docker (DinD) sidecar that handles image builds. This is the trickiest part of the entire setup.
Enable Actions in Forgejo’s config:
kubectl exec -n forgejo deploy/forgejo -- sh -c \
'cat >> /data/gitea/conf/app.ini << EOF
[actions]
ENABLED = true
DEFAULT_ACTIONS_URL = https://github.com
EOF'
kubectl rollout restart deployment/forgejo -n forgejo
Get a runner registration token from Forgejo’s admin panel (Site Administration > Actions > Runners > Create new Runner), then deploy the runner.
The runner deployment is a two-container pod. These are the decisions that took the most debugging:
- The DinD container must run with DOCKER_TLS_CERTDIR: "" (TLS disabled). Otherwise job containers cannot access the Docker daemon, because the TLS certs are not mounted into them.
- The runner config must set container.network: host so job containers share the pod network and can reach DinD on tcp://localhost:2375.
- The runner must wait for DinD to be ready before starting. Without a health check loop, the runner crashes because the Docker daemon is not listening yet.
- Add --insecure-registry flags to DinD pointing at your Forgejo instance so it can push images over HTTP.
apiVersion: v1
kind: ConfigMap
metadata:
  name: runner-config
  namespace: forgejo
data:
  config.yaml: |
    runner:
      capacity: 1
      timeout: 3600s
    container:
      network: host
      privileged: true
      docker_host: tcp://localhost:2375
      valid_volumes: []
      options: ""
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forgejo-runner
  namespace: forgejo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: forgejo-runner
  template:
    metadata:
      labels:
        app: forgejo-runner
    spec:
      containers:
        - name: runner
          image: code.forgejo.org/forgejo/runner:6.3.1
          command:
            - sh
            - -c
            - |
              echo "Waiting for Docker..."
              for i in $(seq 1 60); do
                wget -q --spider http://localhost:2375/_ping 2>/dev/null && break
                sleep 2
              done
              echo "Docker is ready"
              if [ ! -f /data/.runner ]; then
                forgejo-runner register \
                  --instance http://forgejo.forgejo.svc:80 \
                  --token <your-registration-token> \
                  --name k3s-runner \
                  --labels ubuntu-latest:docker://node:20-bookworm \
                  --no-interactive
              fi
              cp /config/config.yaml /data/config.yaml
              forgejo-runner daemon --config /data/config.yaml
          env:
            - name: DOCKER_HOST
              value: tcp://localhost:2375
          volumeMounts:
            - name: runner-data
              mountPath: /data
            - name: runner-config
              mountPath: /config
        - name: docker
          image: docker:27-dind
          securityContext:
            privileged: true
          env:
            - name: DOCKER_TLS_CERTDIR
              value: ""
          args:
            - --insecure-registry=forgejo.forgejo.svc:80
            - --insecure-registry=<your-forgejo-ip>
          volumeMounts:
            - name: docker-data
              mountPath: /var/lib/docker
      volumes:
        - name: runner-data
          emptyDir: {}
        - name: docker-data
          emptyDir: {}
        - name: runner-config
          configMap:
            name: runner-config
Configure k3s to Pull from Forgejo
k3s needs to know how to pull images from your Forgejo registry over HTTP:
# /etc/rancher/k3s/registries.yaml
mirrors:
  "your-forgejo-ip":
    endpoint:
      - "http://your-forgejo-ip"
Restart k3s after adding this file (sudo systemctl restart k3s).
Add Secrets to the Repository
In Forgejo, go to your repo’s Settings > Actions > Secrets and add:
| Name | Value |
|---|---|
| REGISTRY_USER | Your Forgejo username |
| REGISTRY_PASSWORD | Your Forgejo password |
| KUBECONFIG_DATA | Contents of /etc/rancher/k3s/k3s.yaml (change 127.0.0.1 to the node’s real IP) |
| SUPABASE_URL | The URL the frontend should call Supabase on (your Kong LoadBalancer address) |
| SUPABASE_ANON_KEY | The anon JWT you generated |
The Workflow File
Create .forgejo/workflows/deploy.yaml in your repo:
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  DOCKER_HOST: tcp://localhost:2375

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: https://code.forgejo.org/actions/checkout@v4

      - name: Install Docker CLI
        run: |
          apt-get update && apt-get install -y ca-certificates curl
          install -m 0755 -d /etc/apt/keyrings
          curl -fsSL https://download.docker.com/linux/debian/gpg \
            -o /etc/apt/keyrings/docker.asc
          echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] \
            https://download.docker.com/linux/debian bookworm stable" \
            > /etc/apt/sources.list.d/docker.list
          apt-get update && apt-get install -y docker-ce-cli

      - name: Build image
        run: |
          docker build \
            --build-arg NEXT_PUBLIC_SUPABASE_URL=${{ secrets.SUPABASE_URL }} \
            --build-arg NEXT_PUBLIC_SUPABASE_ANON_KEY=${{ secrets.SUPABASE_ANON_KEY }} \
            -t registry/user/app:${{ github.sha }} \
            -t registry/user/app:latest \
            .

      - name: Push to registry
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | \
            docker login your-registry \
              -u ${{ secrets.REGISTRY_USER }} --password-stdin
          docker push registry/user/app:${{ github.sha }}
          docker push registry/user/app:latest

      - name: Deploy to k3s
        run: |
          curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl"
          chmod +x kubectl
          mkdir -p ~/.kube
          echo '${{ secrets.KUBECONFIG_DATA }}' > ~/.kube/config
          chmod 600 ~/.kube/config
          ./kubectl set image deployment/coffee-app \
            coffee-app=registry/user/app:${{ github.sha }} \
            -n supabase
          ./kubectl rollout status deployment/coffee-app \
            -n supabase --timeout=120s
Commit and push. The pipeline triggers automatically. Within 2-3 minutes your app is rebuilt and redeployed.
The Workflow After Setup
From now on, deploying a change to your coffee review app is:
# Change the header color, fix a bug, add a feature
git add .
git commit -m "make header blue"
git push
# Done. App updates in ~2 minutes.
No docker build. No SCP. No kubectl. Just push.
Every Gotcha We Hit
A consolidated table for the “I just want to search for my specific error” crowd:
| Error | Cause | Fix |
|---|---|---|
| secrets-encrypt status: Disabled | Service file flags conflicting with config.yaml | Clean reinstall with the flag in config.yaml from first boot |
| password authentication failed in Forgejo | SSL set to “Require” but Postgres has no SSL | Change to “Disable” in the install wizard |
| Push to create is not enabled | Forgejo requires repos to exist before pushing | Create the repo in the web UI first, or enable push-to-create |
| data directory has wrong ownership | Overriding the Supabase PG entrypoint | Remove the command override, let the entrypoint handle it |
| role "postgres" does not exist | Supabase PG17 only creates supabase_admin | Connect as supabase_admin, create roles manually |
| JWSError JWSInvalidSignature | JWT signed with base64-decoded secret | Use openssl rand -hex 32 for a plain string |
| permission denied for schema public | GoTrue’s role lacks CREATE on public | GRANT USAGE, CREATE ON SCHEMA public TO supabase_auth_admin |
| next: command not found | next is a frontend dep, not root | Install deps in both root and frontend directories |
| Cannot find module '/app/server.js' | Standalone preserves subdirectory structure | CMD ["node", "frontend/server.js"] |
| No data in the frontend | NEXT_PUBLIC_* baked with placeholders | Rebuild with real values as build ARGs |
| standalone directory not found | output: 'standalone' not committed | Commit next.config.js before pushing |
| CI: Cannot connect to Docker daemon | Job container cannot reach DinD | container.network: host in runner config, disable TLS |
| CI: open /certs/client/ca.pem: no such file | TLS enabled but certs not in job container | DOCKER_TLS_CERTDIR: "" on DinD |
| CI: runner crashes on startup | Runner starts before DinD is ready | Health check loop (wget --spider) before the daemon starts |
| CI: timeout: unmarshal !!int | Runner config needs a duration string | timeout: 3600s not timeout: 3600 |
| CI: node: executable not found | Checkout action needs Node.js | Map ubuntu-latest to a Node image in the runner labels |
What Comes Next
What we built is a working pipeline: push code, app deploys. But it is not yet a full enterprise pipeline. Here is what the next layers look like:
| Layer | What It Adds | Status |
|---|---|---|
| Tests in CI | Run unit/integration tests before building | Easy, just add a workflow step |
| Argo CD | GitOps. Cluster state matches Git. Drift detection, auto-rollback | Next project |
| Image scanning | Trivy scans for CVEs before deploy | One workflow step |
| Staging environment | Deploy to staging, smoke test, promote to prod | Needs a second namespace |
| Observability | Prometheus + Grafana for metrics, Loki for logs | Separate deploy |
| Secrets management | OpenBao or Infisical instead of plaintext k8s secrets | Separate deploy |
The difference between what we have and full GitOps is one layer: right now CI directly mutates the cluster with kubectl set image. With Argo CD, CI would update an image tag in a manifests Git repo, and Argo CD would sync the cluster to match. Nobody runs kubectl directly. Git becomes the audit log for every change. That is the pattern enterprises use, and it is the pattern you want if you ever have agents autonomously deploying services.
But for a solo dev shipping a toy coffee review app to see if the concept has legs, what we have is enough. Ship first, refine the pipeline when the pain justifies it.
Total recurring cost: $0/month. k3s, Supabase, Forgejo, MetalLB, and the CI runner are all open source. The only cost is electricity and hardware.
Compared to running this on Supabase Cloud ($25/month) + Vercel ($20/month) + GitHub (free until you need Actions minutes), self-hosting saves roughly $540/year per project. Multiply by a dozen experiments and the hardware pays for itself quickly.