With the bare metal server secured behind a private VPN and locked down at the network layer, it was time to set up a lightweight Kubernetes distribution to orchestrate workloads.
For this part, I chose K3s, and in this post, I’ll walk through how I got it running, validated it, and prepared it for secure workload deployment with SSL automation.
Why K3s?
K3s is a minimal, certified Kubernetes distribution built for production use in resource-constrained environments. It was designed by Rancher Labs and has the following advantages for my setup:
Lightweight: Small binary (~100MB), fewer moving parts
Single binary: Bundles kubelet, kube-proxy, containerd, and more
Reduced memory footprint: Great for a single-node environment
Embedded SQLite by default: Eliminates the need for a separate etcd setup
Faster provisioning: I got a working control plane in under 2 minutes
Since I’m running this on a dedicated Hetzner bare metal node, I didn’t need a full-scale Kubernetes install with HA components.
Step 1: Installing K3s (Single Node)
I followed the official quick start guide for a basic single-node installation.
curl -sfL https://get.k3s.io | sh -
After a few seconds, the node was up and running with a fully functional Kubernetes API.
K3s installs the kubeconfig at /etc/rancher/k3s/k3s.yaml. I copied it to my home directory for ease of use:
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
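If you'd rather not copy the file, K3s can also be told at install time to write a world-readable kubeconfig (I didn't do this here, but the --write-kubeconfig-mode option is part of the K3s installer); you can then point KUBECONFIG at the original path instead:
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml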
Step 2: Validating the K3s Cluster
To confirm that the cluster was functioning properly, I ran:
kubectl get nodes
kubectl get pods --all-namespaces
The output showed the single node in the Ready state and all system pods running smoothly. Since I installed K3s behind VPN access, all kubectl operations ran through the private tunnel.
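As an optional smoke test beyond what I originally ran, a throwaway deployment confirms the node can actually pull images and schedule workloads (the name smoke-test is just a placeholder):
kubectl create deployment smoke-test --image=nginx
kubectl rollout status deployment/smoke-test
kubectl delete deployment smoke-test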
Step 3: Identifying the Private CNI IP
K3s uses flannel as the default CNI. Each node gets a private overlay network IP, which is critical for routing internal service-to-service traffic securely.
To fetch the node’s CNI IP:
ip -o -4 addr show | grep flannel
This IP is different from the public server IP or even the VPN IP – it’s the overlay network IP, which I’ll use for all future secure communication between internal services.
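To cross-check from the Kubernetes side (an extra step, not part of my original flow), you can also read the node's pod CIDR – the per-node slice of K3s's default 10.42.0.0/16 cluster CIDR that flannel hands overlay IPs out from:
kubectl get node -o jsonpath='{.items[0].spec.podCIDR}{"\n"}'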
Step 4: Verifying kubectl and helm
kubectl comes pre-installed with K3s. To install helm, I used:
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
And confirmed it was working:
helm version
At this point, I had both cluster access (kubectl) and package management (helm) ready to go.
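As one more optional sanity check, listing releases across all namespaces confirms that helm can reach the cluster API with the same kubeconfig:
helm list --all-namespaces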
Step 5: Installing cert-manager Using Helm
To automate TLS certificate management in the cluster, I installed cert-manager, which supports Let’s Encrypt and other issuers via ACME.
kubectl create namespace cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.14.2 \
--set installCRDs=true
I waited for all pods in the cert-manager namespace to become Running:
kubectl get pods -n cert-manager
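Instead of re-running that command by hand, a single kubectl wait can block until the pods report Ready (or fail after a timeout):
kubectl wait --for=condition=Ready pods --all -n cert-manager --timeout=120s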
cert-manager gives me the ability to generate SSL certificates – both wildcard and standalone – using DNS01 challenge mode, ideal for clusters without public ingress or open ports.
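Before moving on to Let's Encrypt in Part 4, one quick way to prove the installation can actually issue certificates is a throwaway self-signed Issuer and Certificate in a scratch namespace – a sketch along the lines of the verification example in the cert-manager docs; all names below are placeholders:
kubectl create namespace cert-manager-test
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: test-selfsigned
  namespace: cert-manager-test
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager-test
spec:
  dnsNames:
    - example.com
  secretName: selfsigned-cert-tls
  issuerRef:
    name: test-selfsigned
EOF
# The Certificate should reach Ready within a few seconds if cert-manager is healthy
kubectl wait --for=condition=Ready certificate/selfsigned-cert -n cert-manager-test --timeout=60s
kubectl delete namespace cert-manager-test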
What’s Next
Now that the cluster is up and cert-manager is installed, the next step is to:
Create a ClusterIssuer using Let’s Encrypt
Generate wildcard SSL certificates for a sample domain
Deploy workloads into the cluster with valid TLS termination
That’s coming up in Part 4.
References
K3s Quick Start Guide: https://docs.k3s.io/quick-start/
cert-manager Documentation: https://cert-manager.io/docs/
Helm Installation: https://helm.sh/docs/intro/install/