From Code to Live App in Your Homelab: Your Personal Cloud, No Bills Attached!
Ever looked at services like Vercel, Netlify, or Heroku and thought, “Wow, deploying my web app is so easy, just connect my GitHub and poof it’s live!”? It’s magical! But… that magic often comes with limitations on free tiers or escalating costs as your projects grow.
What if you could have that same “push-to-deploy” magic, but running entirely on your own hardware in your homelab? Imagine having your own private, powerful deployment platform using that beefy Proxmox server sitting in the corner, without paying recurring cloud provider bills for the core service.
Good news: You absolutely can! If you have a Proxmox server with decent CPU, RAM, and storage, you have the foundation for building your own robust deployment system. This guide will walk you through setting it up, explaining the concepts simply, even if you’re not a command-line guru. Our goal? Get your code from a GitHub repository to a live, running application accessible via a URL, mostly automatically after the initial setup.
The Big Picture: An Automated Restaurant Kitchen
Think of deploying software like running a high-tech restaurant kitchen.
- The Recipe Book (Git & GitHub): Your application code lives here. Every change is tracked, like edits to a master recipe. GitHub is the cloud library holding these books.
- The Prep Cooks (CI/CD – GitHub Actions Runner): When a recipe (code) changes, automated cooks spring into action. They follow instructions to prepare the ingredients – testing the recipe, mixing things, and putting the dish components into standardized containers (Docker). We’ll have these cooks working inside your homelab.
- Standardized Containers (Docker): Imagine every dish component perfectly packaged in a standard box. This ensures it’s the same everywhere, from the prep station to the final plate. Docker does this for your application code.
- The Kitchen Stations (Kubernetes – K3s): This is the organized layout of your kitchen – ovens, fryers, plating areas. Kubernetes (we’ll use a lightweight version called K3s) manages where and how many containers (dishes) are running, making sure everything works smoothly even if one station gets busy. It runs on a virtual machine inside your Proxmox server.
- The Head Chef (GitOps – ArgoCD): This is the manager who only looks at the master Recipe Book (Git). They constantly walk around the kitchen stations (Kubernetes) ensuring exactly what’s in the book is being prepared. If a cook makes something wrong, the Head Chef spots the difference and corrects it based on the book. This is called GitOps – Git is the source of truth.
- The Secure Delivery Service (Cloudflare Tunnel): You need a way for customers (users) to securely order and receive their food (access your app) without exposing your entire kitchen address to the world. Cloudflare Tunnel provides a secure, private tunnel from the internet directly to your app.
The Tools We’ll Use (Mostly Free & Open Source!)
Here’s a quick rundown of the tools we’ll be using:
| Tool | Role in Analogy | What it Does | Why We Use It |
|---|---|---|---|
| Proxmox | The Building/Power | Hosts our virtual machines (VMs). | You already have it! Provides the hardware base. |
| Ubuntu VM | Kitchen Room | A virtual computer inside Proxmox where our software kitchen will run. | Standard, reliable Linux environment. |
| K3s (Kubernetes) | Kitchen Stations | Manages and runs our application containers. | Lightweight, easy-to-install Kubernetes. |
| Docker | Standardized Containers | Packages your application code and dependencies. | Industry standard for containerization. |
| GitHub | Recipe Book Library | Stores your code (recipes) and tracks changes. | Popular, free for public/private repos. |
| GitHub Actions Runner (Self-Hosted) | Prep Cooks (In-House) | Runs the automated build/test process on your hardware. | Free, integrates with GitHub, runs locally. |
| ArgoCD | Head Chef (GitOps Manager) | Watches GitHub for changes and updates Kubernetes automatically. | Powerful GitOps tool, provides a UI. |
| Cloudflare Tunnel | Secure Delivery Service | Securely exposes your app to the internet without opening firewall ports. | Free, secure, easy DNS integration. |
Let’s start building!
Phase 1: Setting the Stage (The Kitchen Foundation – Proxmox & K3s)
First, we need a place for our kitchen stations (K3s) to live. We’ll create a virtual computer (VM) inside your Proxmox setup.
- Create an Ubuntu VM in Proxmox:
  - Log into your Proxmox web interface (`https://<your-proxmox-ip>:8006`).
  - Download an Ubuntu Server cloud image (like 22.04 LTS).
  - Create a new VM. Give it a good amount of resources (e.g., 4 CPU cores, 8GB RAM, 50GB+ disk). The exact steps involve importing the disk image and setting options – Proxmox documentation or online guides can help here if you're new to creating VMs, and there's a command-line sketch at the end of this phase. Make sure networking is set to Bridged mode (usually `vmbr0`).
- SSH into your new VM: Once the VM is running, find its IP address in Proxmox. Open a terminal on your main computer and connect:
```bash
# Replace <vm_ip_address> with the actual IP
# The default username for Ubuntu cloud images is often 'ubuntu'
ssh ubuntu@<vm_ip_address>
```
- Install K3s (The Kitchen Stations): This single command installs our lightweight Kubernetes system.
```bash
curl -sfL https://get.k3s.io | sh -
```
Wait for it to complete.
- Check if K3s is Running: This command asks Kubernetes, “Show me my stations (nodes).” You should see one node listed (your VM).
```bash
sudo kubectl get nodes
```
(Think of it like this: You’ve just built the room and installed the main cooking appliances.)
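By the way, if you'd rather script the VM creation than click through the Proxmox UI, something along these lines works from the Proxmox host's shell. Treat it as a rough sketch, not gospel: the VM ID `200`, the storage name `local-lvm`, and the cloud image filename are assumptions you'll need to adjust to your own setup.

```bash
# On the Proxmox host: create a VM from the Ubuntu 22.04 cloud image (all values are examples)
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

qm create 200 --name k3s-vm --memory 8192 --cores 4 --net0 virtio,bridge=vmbr0
qm importdisk 200 jammy-server-cloudimg-amd64.img local-lvm
qm set 200 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-200-disk-0
qm set 200 --ide2 local-lvm:cloudinit --boot order=scsi0
qm set 200 --ciuser ubuntu --ipconfig0 ip=dhcp --sshkeys ~/.ssh/id_rsa.pub
qm resize 200 scsi0 50G
qm start 200
```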
Phase 2: Hiring the Head Chef (The Brains – ArgoCD)
Now, we need the manager who ensures everything matches the recipe book (Git). That’s ArgoCD.
- Install ArgoCD: These commands tell Kubernetes to install ArgoCD.
```bash
# Create a dedicated 'department' for ArgoCD
sudo kubectl create namespace argocd

# Apply the official ArgoCD installation recipe
sudo kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
Wait a few minutes for everything to start up.
- Get the Initial Admin Password: ArgoCD secures itself with a default password. This command retrieves it.
```bash
sudo kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
```
Copy this password somewhere safe!
- Access the ArgoCD Web UI (Temporarily): To check it out, we can forward the service directly to your computer. Open another terminal window on your main computer (leave the SSH session open) and run:

```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443 --kubeconfig <path_to_your_k3s_config>
```

(You'll need to copy the K3s config file from your VM, `/etc/rancher/k3s/k3s.yaml`, to your local machine first – see the sketch below for one way to do that. Alternatively, since K3s already ships `kubectl` on the VM, you can run `sudo kubectl port-forward svc/argocd-server -n argocd 8080:443 --address 0.0.0.0` directly on the VM and access `https://<vm_ip_address>:8080` instead.)

Now, open your web browser and go to `https://localhost:8080` (accept the self-signed certificate warning). Log in with username `admin` and the password you copied. This is your deployment command center!
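One way to get that kubeconfig onto your main computer (a sketch – it assumes the `ubuntu` user has sudo rights on the VM and that you have `kubectl` installed locally; the file contains admin credentials, so treat it like a password):

```bash
# On your main computer: pull the kubeconfig from the VM (it's root-readable only, hence sudo)
ssh ubuntu@<vm_ip_address> "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/k3s.yaml

# The file points at 127.0.0.1 - change that to the VM's IP so kubectl talks to the right machine
# (on macOS, use: sed -i '' 's/127.0.0.1/<vm_ip_address>/' ~/k3s.yaml)
sed -i 's/127.0.0.1/<vm_ip_address>/' ~/k3s.yaml

# Now the port-forward command from above works from your own machine
kubectl port-forward svc/argocd-server -n argocd 8080:443 --kubeconfig ~/k3s.yaml
```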
(The Head Chef is hired and has an office!)
Phase 3: Setting Up the Assembly Line (CI/CD – GitHub Actions)
We need the automated process (Prep Cooks) that takes code changes, packages them (Docker), and tells the Head Chef (ArgoCD) about the new recipe version. We’ll use GitHub Actions, but run the “cooks” (runners) ourselves, inside our homelab, so it’s free!
- Set up a Self-Hosted Runner:
  - Go to your GitHub repository settings > Actions > Runners > New self-hosted runner.
  - Choose Linux / x64.
  - Follow the instructions provided there. You'll download a small program onto your Ubuntu VM, configure it to connect to your specific repository, and start it. It's best to run it as a service so it stays running.
```bash
# On your Ubuntu VM (Example commands - use the ones from GitHub!)

# Create a directory for the runner
mkdir actions-runner && cd actions-runner

# Download the runner package (version might differ)
curl -o actions-runner-linux-x64-2.3xx.x.tar.gz -L https://github.com/actions/runner/releases/download/v2.3xx.x/actions-runner-linux-x64-2.3xx.x.tar.gz

# Extract the installer
tar xzf ./actions-runner-linux-x64-2.3xx.x.tar.gz

# Configure it (use the token from GitHub)
./config.sh --url https://github.com/YOUR_USERNAME/YOUR_REPO --token YOUR_TOKEN

# Install and start the service (recommended)
sudo ./svc.sh install
sudo ./svc.sh start
```
Now, GitHub knows it can send jobs to your VM.
- Create the GitHub Actions Workflow: In your code repository, create a file named `.github/workflows/deploy.yml`. This file tells the runner what to do when you push code.

```yaml
# .github/workflows/deploy.yml
name: Build and Deploy to Homelab

on:
  push:
    branches: [ main ]  # Trigger this workflow on pushes to the main branch

jobs:
  build-and-update-manifest:
    runs-on: self-hosted  # IMPORTANT: Tells GitHub to use YOUR runner!
    steps:
      - name: Check out code
        uses: actions/checkout@v3
        with:
          # We need repo access to push the manifest change
          token: ${{ secrets.PAT_TOKEN }}  # You'll need to create a Personal Access Token with repo scope

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Login to GitHub Container Registry (GHCR)
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}  # Provided automatically by GitHub

      - name: Build and push Docker image
        id: build-push
        uses: docker/build-push-action@v4
        with:
          context: .  # Assumes Dockerfile is in the root
          push: true
          # Tag with unique commit ID (note: GHCR image names must be lowercase)
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
          # Optional: Add a 'latest' tag too if needed:
          # tags: |
          #   ghcr.io/${{ github.repository }}:${{ github.sha }}
          #   ghcr.io/${{ github.repository }}:latest

      - name: Update Kubernetes manifest (IMPORTANT GitOps step)
        run: |
          # Assumes your deployment manifest is at k8s/deployment.yaml
          # This command replaces the image line with the newly built one
          sed -i "s|image: ghcr.io/${{ github.repository }}:.*|image: ghcr.io/${{ github.repository }}:${{ github.sha }}|g" k8s/deployment.yaml

      - name: Commit and push manifest changes
        run: |
          git config --global user.name 'GitHub Actions CI'
          git config --global user.email 'ci@github.actions'
          git add k8s/deployment.yaml
          # Only commit if there are changes
          git diff --staged --quiet || git commit -m "Update deployment image to ${{ github.sha }}"
          git push
```
Important Notes for this step:
- You need a `Dockerfile` in your repository to tell Docker how to build your app's container image (a minimal sketch follows below).
- The VM running your self-hosted runner also needs Docker installed to build images – K3s uses containerd, not Docker, so it isn't there automatically. On Ubuntu, `sudo apt install docker.io` followed by `sudo usermod -aG docker $USER` (then log out and back in) is one way to set it up.
- You need Kubernetes manifest files (like `k8s/deployment.yaml`) in your repository. These are the recipes ArgoCD reads.
- You need to create a GitHub Personal Access Token (PAT) with `repo` scope and add it as a secret named `PAT_TOKEN` in your repository settings (Settings > Secrets and variables > Actions). This allows the action to push the manifest changes back to your repo.
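For reference, here's roughly what a minimal `Dockerfile` might look like. This is a sketch, not a prescription: it assumes a Node.js app with a `start` script listening on port 8080, so swap the base image, build steps, and port for whatever your app actually uses.

```dockerfile
# Dockerfile - a minimal sketch (assumes a Node.js app with a "start" script, listening on port 8080)
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the application code
COPY . .

# The port your app listens on inside the container (matches containerPort in k8s/deployment.yaml)
EXPOSE 8080

CMD ["npm", "start"]
```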
(The assembly line is designed! When code changes, it gets packaged, and the recipe book is updated with the new package ID.)
Phase 4: Preparing the Recipes & Serving (Kubernetes Manifests & Cloudflare)
- Create Kubernetes Manifests: In your GitHub repo (e.g., in a `k8s/` directory), create files defining how your app should run.

`k8s/deployment.yaml`: Tells K3s how to run your app container (how many copies, which image to use). This is the file the GitHub Action updates.

```yaml
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-cool-app  # Choose a name for your app
spec:
  replicas: 2  # Run 2 copies for reliability
  selector:
    matchLabels:
      app: my-cool-app
  template:
    metadata:
      labels:
        app: my-cool-app
    spec:
      containers:
        - name: app
          # !! THIS LINE GETS UPDATED BY GITHUB ACTIONS !!
          image: ghcr.io/YOUR_USERNAME/YOUR_REPO:initial  # Placeholder image
          ports:
            - containerPort: 8080  # The port your app listens on INSIDE the container
```
`k8s/service.yaml`: Gives your app pods a stable internal network address.

```yaml
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-cool-app-service
spec:
  selector:
    app: my-cool-app  # Selects the pods from the Deployment
  ports:
    - protocol: TCP
      port: 80          # The port the SERVICE listens on
      targetPort: 8080  # The port on the CONTAINER to send traffic to
  type: ClusterIP  # Only reachable inside the cluster initially
```
`k8s/ingress.yaml`: (Optional but recommended for web access) Describes how external traffic reaches your service. We'll configure Cloudflare to handle this securely later.

```yaml
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-cool-app-ingress
  # Annotations needed if using cert-manager for auto-SSL later
  # annotations:
  #   cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: my-cool-app.yourdomain.com  # The URL you want to use
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-cool-app-service  # Points to the Service above
                port:
                  number: 80  # Points to the Service's port
  # tls:  # Section for enabling HTTPS later
  #   - hosts:
  #       - my-cool-app.yourdomain.com
  #     secretName: my-cool-app-tls  # K8s will store the certificate here
```
- Tell ArgoCD About Your App: In the ArgoCD UI (or using a YAML file applied via `kubectl` – a declarative sketch follows after this list), create a new Application:
  - Application Name: `my-cool-app`
  - Project: `default`
  - Sync Policy: `Automatic` (Enable `Prune Resources` and `Self Heal`)
  - Repository URL: `https://github.com/YOUR_USERNAME/YOUR_REPO.git`
  - Revision: `HEAD` (or `main`)
  - Path: `k8s` (the directory in your repo holding the YAML files)
  - Cluster URL: `https://kubernetes.default.svc`
  - Namespace: `default` (or create a dedicated one like `my-cool-app`)
  - Click Create.

  ArgoCD will now fetch those YAML files and apply them to K3s. Your app will start running using the placeholder image!
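If you prefer the declarative route over clicking through the UI, the same Application can be described in YAML. Here's a sketch with roughly the same settings as the UI steps above (the filename is arbitrary):

```yaml
# my-cool-app-application.yaml - roughly the same settings as the UI steps above
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-cool-app
  namespace: argocd        # ArgoCD looks for Applications in its own namespace
spec:
  project: default
  source:
    repoURL: https://github.com/YOUR_USERNAME/YOUR_REPO.git
    targetRevision: HEAD
    path: k8s              # Directory in the repo holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default     # Where the app's resources get created
  syncPolicy:
    automated:
      prune: true          # "Prune Resources"
      selfHeal: true       # "Self Heal"
```

Apply it on the VM with `sudo kubectl apply -f my-cool-app-application.yaml`.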
- Set up Cloudflare Tunnel (Secure Delivery):
  - Sign up for a free Cloudflare account and add your domain.
  - In the Cloudflare dashboard, go to `Zero Trust`.
  - Navigate to `Access` -> `Tunnels`.
  - Click `Create a tunnel`. Choose `cloudflared`. Give it a name (e.g., `homelab-k8s`).
  - Follow the instructions to install `cloudflared` (you might already have it if you run other homelab services) and run the command to connect it to Cloudflare. It's often best to run `cloudflared` inside your K3s cluster using its official Helm chart or deployment methods for high availability (a minimal Deployment sketch follows after this list).
  - Once connected, configure the Tunnel's `Public Hostname`:
    - Subdomain: `my-cool-app`
    - Domain: `yourdomain.com`
    - Service Type: `HTTP`
    - Service URL: `http://my-cool-app-service.default.svc.cluster.local` (This uses Kubernetes internal DNS to point directly to your app's Service inside the cluster. Adjust the namespace `default` if you used a different one.)
  - Save the hostname and the tunnel configuration.
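If you do want to run the connector inside the cluster, here's a minimal Deployment sketch that uses the tunnel token the dashboard gives you. The Secret name `cloudflared-token` and the default namespace are assumptions – create the Secret first, e.g. `sudo kubectl create secret generic cloudflared-token --from-literal=token=<your_tunnel_token>`.

```yaml
# cloudflared-deployment.yaml - a minimal sketch of running the tunnel connector in K3s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 2                  # Two connectors for a bit of redundancy
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args:
            - tunnel
            - --no-autoupdate
            - run
            - --token
            - $(TUNNEL_TOKEN)  # Expanded from the env var below
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflared-token  # Secret holding the tunnel token
                  key: token
```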
(The recipes are written, and the secure delivery route is established!)
Putting It All Together: The Magic Moment! ✨
- Make a code change to your application in your GitHub repository.
- Commit and `git push` the changes to the `main` branch.
- Watch the Automation:
  - Go to the "Actions" tab in your GitHub repo. You'll see your `Build and Deploy to Homelab` workflow trigger and run on your self-hosted runner.
  - It will build the Docker image and push it to GHCR.
  - It will update the `image:` tag in your `k8s/deployment.yaml` file and push that change back to GitHub.
- Check ArgoCD:
  - Go to your ArgoCD UI. Within a minute or two, it will detect the change in the `k8s/deployment.yaml` file in GitHub.
  - The status of your `my-cool-app` application in ArgoCD will change to `OutOfSync`.
  - Because you set the Sync Policy to `Automatic`, ArgoCD will automatically apply the change to K3s. You'll see it update the Deployment, and K3s will perform a rolling update, replacing the old pods with new ones running your updated code. The status will become `Healthy` and `Synced`.
- Visit Your App: Open your browser and go to `https://my-cool-app.yourdomain.com`. You should see your updated application live!
You did it! You pushed code, and through a series of automated steps running entirely within your homelab (except for GitHub storing the code and Cloudflare providing the tunnel), your application updated automatically.
Why This Rocks
- Cost-Effective: No bills from AWS/Google Cloud/Azure for running the core deployment platform. Your only costs are your hardware, electricity, and domain name. GHCR is free for public repos and generous for private. Cloudflare Tunnel is free.
- Control & Privacy: Your code runs on your hardware. You control the environment.
- Learning: You gain invaluable experience with industry-standard tools like Docker, Kubernetes, GitOps, and CI/CD.
- Scalability: K3s can handle many applications. Need more power? Add more worker nodes (VMs) in Proxmox.
- The “Wow” Factor: It’s incredibly satisfying to build your own automated cloud platform!
What’s Next? (Beyond the Basics)
This guide gets you the core “push-to-deploy” flow. You can expand this significantly:
- Monitoring: Install tools like Prometheus and Grafana (easily via Helm charts – see the sketch after this list) to see how your cluster and apps are performing.
- Automated SSL: Use `cert-manager` in K3s to automatically get free Let's Encrypt SSL certificates for your Ingress hostnames.
- Databases & Persistence: Learn about Kubernetes Persistent Volumes to run databases or store data that needs to survive pod restarts.
- Secrets Management: Use Kubernetes Secrets or tools like HashiCorp Vault for handling API keys and passwords securely.
- Backups: Implement strategies to back up your K3s state, ArgoCD configurations, and application data.
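As a taste of the monitoring piece, here's roughly how it might look with Helm. This is a sketch: it assumes you've installed the Helm CLI on the VM and uses the community `kube-prometheus-stack` chart; the release and namespace names are just suggestions.

```bash
# Add the community Prometheus chart repository and install the combined Prometheus + Grafana stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install into its own namespace; sudo because K3s keeps its kubeconfig at /etc/rancher/k3s/k3s.yaml (root-only)
sudo helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --kubeconfig /etc/rancher/k3s/k3s.yaml
```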
Conclusion
You’ve just transformed your Proxmox server from a simple hypervisor into the heart of a powerful, personal cloud deployment platform. By combining K3s, Docker, GitHub Actions (self-hosted), ArgoCD, and Cloudflare, you’ve achieved an automated Git-to-live workflow without the ongoing expense of traditional cloud providers. It takes some setup, but the result is a robust, educational, and incredibly useful system for any homelab enthusiast. Happy deploying!