My Home Lab — Network, Kubernetes, and Infrastructure
My home lab is a platform for running infrastructure I actually use, experimenting with technologies I work with professionally, and self-hosting services I’d rather not hand to third parties. The design philosophy mirrors production engineering: infrastructure as code, GitOps for state management, full observability, and automation wherever possible.
The core of the current setup is a bare-metal Kubernetes cluster managed with FluxCD, backed by a VyOS router, Proxmox hypervisors, and a full Grafana observability stack. Everything from the network switches to the 3D printers is monitored.
Networking & VyOS
VyOS is my router and firewall of choice. It handles inter-VLAN routing, WireGuard site-to-site and road warrior VPNs, BGP/OSPF, DNS, and HAProxy-based load balancing. Inside the cluster, MetalLB handles bare-metal LoadBalancer services, and I run two Nginx ingress controllers — one external, one internal — with Istio for service mesh between workloads.
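A minimal sketch of the MetalLB side of this, using the standard `IPAddressPool` and `L2Advertisement` resources. The pool name and address range are illustrative, not the lab's actual values:

```yaml
# Hypothetical MetalLB layer-2 pool for bare-metal LoadBalancer services
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.50.100-10.0.50.150   # example range, not the real lab subnet
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool
```

Any `Service` of type `LoadBalancer` then gets an address from the pool announced via ARP on the local segment.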
DNS filtering runs via Blocky, and certificate management is handled by cert-manager with Let’s Encrypt DNS-01 challenges via Cloudflare.
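The cert-manager setup described above can be sketched as a `ClusterIssuer` using an ACME DNS-01 solver against Cloudflare. The email, secret name, and key are placeholders:

```yaml
# Hypothetical ClusterIssuer for Let's Encrypt DNS-01 via Cloudflare
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token   # Secret holding a scoped API token
              key: api-token
```

DNS-01 avoids exposing port 80 for HTTP-01 challenges, which suits internal-only ingress.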
Posts I’ve written on VyOS configurations:
- VyOS - Site-to-Site VPN using WireGuard and OSPF
- VyOS - WireGuard based Road Warrior VPN Configuration
- VyOS as a Reverse Proxy Load Balancer
- How to Redirect Hardcoded DNS with VyOS
Kubernetes Cluster
The primary compute platform is a self-hosted bare-metal Kubernetes cluster (lab-lon1-uk) with a split control plane architecture:
- 3 × etcd nodes — dedicated VMs for distributed consensus
- 3 × control plane nodes — dedicated VMs, separate from etcd
- 7 × physical worker nodes — each with at least 4 cores and 8 GB RAM
Separating etcd from the control plane gives me a realistic production topology to test against: failure scenarios, quorum behaviour, rolling upgrades, and so on. With three etcd members, quorum is two (⌊n/2⌋ + 1), so the cluster tolerates the loss of a single member. The cluster currently runs 44 applications. Kubernetes and etcd nodes run Fedora CoreOS; all other VMs run Fedora Server.
GitOps with FluxCD
All cluster state lives in Git and is reconciled by FluxCD. The repository follows a base/overlay Kustomize pattern with a clear dependency ordering: infrastructure components are deployed first, applications after. Secrets are encrypted at rest using SOPS with HashiCorp Vault Transit as the encryption backend.
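The dependency ordering and SOPS decryption can be sketched with a Flux `Kustomization` like the following. The paths, names, and the secret holding the Vault token are assumptions for illustration:

```yaml
# Hypothetical Flux Kustomization: apps deploy only after infrastructure
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./apps/production
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: infrastructure        # reconciled only once infrastructure is ready
  decryption:
    provider: sops
    secretRef:
      name: sops-vault-token      # hypothetical Secret with a Vault token for Transit
```

`dependsOn` is what enforces the infrastructure-first ordering mentioned above.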
Dependency updates are automated with Renovate — patch and digest updates merge automatically on a weekly cadence; minor and major versions get a manual review.
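That Renovate policy might look roughly like this in `renovate.json`; the schedule string is an example, not the exact one in use:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "digest"],
      "automerge": true,
      "schedule": ["before 6am on monday"]
    }
  ]
}
```

Minor and major updates fall outside the rule, so they surface as ordinary pull requests for review.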
Storage
The lab runs a layered storage strategy:
- Rook/Ceph (SSD-backed) — distributed block storage for databases and stateful workloads requiring high IOPS
- TrueNAS — NFS and iSCSI over both SSD and HDD tiers for bulk storage
- MinIO — S3-compatible object storage for Loki chunks, Velero backups, and application data
- OpenEBS hostpath — fast local storage for ephemeral and cache workloads
Cluster backups run via Velero to S3-compatible storage on MinIO.
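A recurring Velero backup can be sketched with its `Schedule` resource. The name, cron expression, and retention are illustrative:

```yaml
# Hypothetical Velero schedule: nightly full-cluster backup, 30-day retention
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-cluster-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"     # every night at 02:00
  template:
    includedNamespaces:
      - "*"
    storageLocation: default   # BackupStorageLocation pointing at the S3 bucket
    ttl: 720h0m0s              # expire backups after 30 days
```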
Secrets Management
HashiCorp Vault is the central secrets store for the lab. It handles:
- SOPS encryption keys for FluxCD (via Vault Transit)
- Auto-unseal via AWS KMS
- Credentials for all infrastructure automation (Terraform, Ansible)
- Secrets injected into Kubernetes workloads at runtime
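The SOPS-to-Vault-Transit wiring from the list above lives in a `.sops.yaml` at the repository root. This is a sketch; the Vault address and key name are placeholders:

```yaml
# Hypothetical .sops.yaml: encrypt Kubernetes secret values with Vault Transit
creation_rules:
  - path_regex: .*\.sops\.ya?ml$
    encrypted_regex: ^(data|stringData)$   # only encrypt secret payload fields
    hc_vault_transit_uri: "https://vault.example.internal:8200/v1/sops/keys/flux"
```

Encrypting only `data`/`stringData` keeps the manifests diffable in Git while the values stay ciphertext.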
Observability
The full Grafana stack monitors everything in the lab:
- Grafana Alloy — deployed as a DaemonSet across all nodes, collecting metrics, logs, and traces
- Prometheus — metrics collection with custom exporters
- Mimir — long-term metrics storage
- Loki — log aggregation with one-year retention, backed by MinIO
- Grafana — dashboards for everything from cluster resource usage to filament consumption
Custom exporters cover hardware-level visibility: IPMI for server health, smartctl for disk health, SNMP for network devices, and a Klipper exporter for the 3D printers.
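The SNMP side of that typically uses the multi-target exporter pattern, where Prometheus rewrites the scrape target into a query parameter. A sketch, with example device and exporter addresses:

```yaml
# Hypothetical Prometheus scrape config for snmp_exporter (multi-target pattern)
scrape_configs:
  - job_name: snmp
    metrics_path: /snmp
    params:
      module: [if_mib]            # interface metrics module
    static_configs:
      - targets:
          - 192.168.1.2           # example switch address
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target   # device IP becomes ?target=
      - source_labels: [__param_target]
        target_label: instance         # keep the device as the instance label
      - target_label: __address__
        replacement: snmp-exporter:9116  # scrape the exporter, not the device
```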
Self-Hosted Services
A selection of what runs in the cluster:
Infrastructure & DevOps
- GitLab — source control and CI/CD for all homelab code
- Harbor — private container registry
- AWX — Ansible automation
- Atlantis — Terraform pull request automation
- Netbox — IPAM and infrastructure documentation
- Unifi — network controller
- Vault — secrets management
Communication & Social
- Mastodon — federated social network (@mhamzahkhan@intahnet.co.uk)
- Matrix — self-hosted federated chat
- Mailu — self-hosted mail server
Media
- Jellyfin and Plex — media servers
AI & Automation
- Ollama and LocalAI — local LLM inference
- KubeAI — model serving platform
- OpenClaw — personal AI assistant gateway (multi-channel: connects to messaging platforms, runs locally)
- N8N — workflow automation
Home
- Home Assistant — home automation
- Obico — 3D printer monitoring and failure detection
- Spoolman — filament inventory tracking
Infrastructure as Code
All lab infrastructure is defined as code:
- FluxCD — Kubernetes cluster state (GitOps)
- Terraform — DNS zones (Cloudflare, 12 domains), AWS (SES email, IAM, KMS), Discord server provisioning, domain registration (Namecheap). State stored in MinIO.
- Ansible — server provisioning and configuration for Fedora Server hosts
- Packer — builds a monthly golden image for Fedora Server VMs, used as the base for all non-CoreOS nodes
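Pointing Terraform's S3 backend at MinIO needs a few compatibility flags since MinIO is not AWS. A sketch for Terraform 1.6+; bucket, key, and endpoint are placeholders:

```hcl
# Hypothetical backend config: Terraform state in a MinIO bucket
terraform {
  backend "s3" {
    bucket = "terraform-state"             # example bucket name
    key    = "homelab/terraform.tfstate"
    region = "us-east-1"                   # required by the provider, ignored by MinIO
    endpoints = {
      s3 = "https://minio.example.internal:9000"
    }
    use_path_style              = true     # MinIO uses path-style URLs
    skip_credentials_validation = true     # no AWS STS behind MinIO
    skip_metadata_api_check     = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
  }
}
```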
3D Printing
I run two Ender 3 printers, both on Klipper, controlled by Raspberry Pi 3B+ boards running MainsailOS. They're managed like infrastructure: printer.cfg in Git, print stats flowing into Grafana via the Klipper exporter, and Obico for remote monitoring and failure detection.
Printer 1 — original Ender 3, heavily modified:
- Mainboard: BTT SKR Mini E3 v3
- Hotend: Trianglelabs all-metal with titanium heatbreak, Mellow heat block
- Nozzle: Creality 0.4 mm hardened steel
- Probe: BIQU Microprobe v2
- Input shaping: ADXL345 accelerometer
- Extruder: Bowden
Printer 2 — second Ender 3, different configuration:
- Mainboard: BTT SKR Mini E3 v2
- Hotend: Trianglelabs all-metal with titanium heatbreak, Mellow heat block
- Nozzle: Creality 0.4 mm hardened steel
- Probe: BIQU Microprobe v2
- Extruder: Microswiss Direct Drive
- Z-axis: dual motors
- Input shaping: ADXL345 accelerometer
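The ADXL345 input-shaping setup on both printers maps to a few Klipper config sections. This sketch assumes the accelerometer is wired to the Pi's SPI bus (which also requires an `[mcu rpi]` secondary MCU); the probe point and shaper values are placeholders, since the real ones come from resonance testing:

```ini
# Hypothetical printer.cfg fragment for ADXL345-based input shaping
[adxl345]
cs_pin: rpi:None                # accelerometer on the Raspberry Pi's SPI bus

[resonance_tester]
accel_chip: adxl345
probe_points:
    117.5, 117.5, 20            # roughly the centre of an Ender 3 bed

[input_shaper]
shaper_freq_x: 40.0             # placeholder; use SHAPER_CALIBRATE results
shaper_freq_y: 40.0
shaper_type: mzv
```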
Posts on the 3D printing journey:
Historical Home Lab
Earlier posts from when the lab ran on Cisco hardware and Mikrotik routers: