How I Deployed My Portfolio to Kubernetes at Midnight
Omobayonle Ogundele
It was midnight. My site was already running on Docker on an Oracle Cloud
server. The CI/CD pipeline was working. Monitoring was set up. By any
reasonable measure I should have gone to sleep.
Instead I typed curl -sfL https://get.k3s.io | sh - and started migrating
everything to Kubernetes.
This is that story.
Why Kubernetes?
Before this I was running my portfolio with a single docker run command.
Every deployment meant SSHing into the server, stopping the old container,
pulling the new image and starting a new container. It worked — but it had
problems:
- If the container crashed, it needed manual intervention to restart
- Deployments caused a few seconds of downtime
- There was no self-healing — if something went wrong at 3am, the site stayed down
- Scaling was impossible without manual work
Kubernetes solves all of these. It manages containers for you — restarting
them if they crash, rolling out updates with zero downtime and making sure
the desired state of your system always matches reality.
The Stack
Before getting into the setup, here's what I was working with:
- Oracle Cloud — free tier compute instance (Ubuntu 22.04)
- k3s — lightweight Kubernetes, perfect for a single server
- Harbor — private Docker registry running on my local homelab
- WireGuard — VPN tunnel connecting my homelab to Oracle Cloud
- Drone CI — CI/CD pipeline that builds and deploys on every git push
Step 1 — Installing k3s
k3s is a lightweight Kubernetes distribution that installs with a single command.
Unlike full Kubernetes, it runs comfortably on a small cloud instance with 1GB
of RAM.
curl -sfL https://get.k3s.io | sh -
That's it. One command installs the entire Kubernetes control plane, kubelet,
containerd and all the necessary components. After about 30 seconds:
sudo kubectl get nodes
NAME             STATUS   ROLES           AGE   VERSION
homelab-server   Ready    control-plane   30s   v1.34.5+k3s1
Node is ready. Kubernetes is running.
Step 2 — Connecting from My Local Machine
Running kubectl on the server every time would get old fast. I needed to
control the cluster from my local machine.
k3s stores its kubeconfig at /etc/rancher/k3s/k3s.yaml. I copied that to
my local machine and changed the server address from 127.0.0.1 to 10.0.0.1
— my Oracle server's WireGuard IP.
The tricky part was that I already had two Kubernetes clusters in my kubeconfig
from previous projects (minikube). Replacing the file would have wiped them out.
Instead I merged the configs:
KUBECONFIG=~/.kube/config:~/.kube/k3s-oracle.yaml \
kubectl config view --flatten > ~/.kube/merged.yaml
mv ~/.kube/merged.yaml ~/.kube/config
Now I could switch between clusters with:
kubectl config use-context oracle # Oracle Cloud k3s
kubectl config use-context minikube # Local minikube
Step 3 — Writing the Kubernetes Manifests
Instead of docker run, Kubernetes uses YAML manifests to describe what
should run and how.
I created a k8s/ folder in my project with three files:
namespace.yaml — a logical container for all portfolio resources:
apiVersion: v1
kind: Namespace
metadata:
  name: portfolio
deployment.yaml — tells Kubernetes to run 2 replicas of my app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portfolio
  namespace: portfolio
spec:
  replicas: 2
  selector:
    matchLabels:
      app: portfolio
  template:
    metadata:
      labels:
        app: portfolio
    spec:
      containers:
        - name: portfolio
          image: 10.0.0.2/library/portfolio:latest
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: portfolio-data
              mountPath: /app/instance
            - name: portfolio-uploads
              mountPath: /app/app/static/uploads
      volumes:
        - name: portfolio-data
          hostPath:
            path: /var/lib/docker/volumes/portfolio_data/_data
            type: Directory
        - name: portfolio-uploads
          hostPath:
            path: /var/lib/docker/volumes/portfolio_uploads/_data
            type: Directory
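One thing the manifest above doesn't include is a readiness probe, which is what lets Kubernetes route traffic only to pods that can actually serve requests. A sketch for the container spec, assuming the app answers HTTP GET / on port 5000 (the path and timings are my guesses, not part of the original manifest):

```yaml
containers:
  - name: portfolio
    # ...image, ports and volumeMounts as above...
    readinessProbe:
      httpGet:
        path: /        # assumed health path — swap for a dedicated /healthz if the app has one
        port: 5000
      initialDelaySeconds: 5
      periodSeconds: 10
```

Without a probe, Kubernetes considers a pod Ready as soon as its container starts, which can let a rolling update shift traffic to a pod that hasn't finished booting.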
service.yaml — exposes the app as a LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: portfolio
  namespace: portfolio
spec:
  selector:
    app: portfolio
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
  type: LoadBalancer
Step 4 — The Image Pull Problem
The first deployment failed immediately:
ImagePullBackOff
Kubernetes couldn't pull the image from my Harbor registry. Two problems:
Problem 1 — Insecure registry. Harbor runs on HTTP, not HTTPS, and by default
k3s's containerd refuses to pull from insecure registries. The fix was to tell
k3s to trust it:
sudo tee /etc/rancher/k3s/registries.yaml > /dev/null << 'EOF'
mirrors:
  "10.0.0.2":
    endpoint:
      - "http://10.0.0.2"
configs:
  "10.0.0.2":
    auth:
      username: admin
      password: Harbor12345
EOF
sudo systemctl restart k3s
Problem 2 — Credentials. Even with the registry trusted, Kubernetes
needed credentials to pull the image. I created a pull secret:
kubectl create secret docker-registry harbor-secret \
--docker-server=10.0.0.2 \
--docker-username=admin \
--docker-password=Harbor12345 \
--namespace=portfolio
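Creating the secret isn't quite enough on its own — the pod spec also has to reference it. (In k3s, the registries.yaml auth above may already let containerd authenticate, but the explicit reference works regardless of runtime configuration.) A sketch of the addition to deployment.yaml:

```yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: harbor-secret   # the pull secret created above
      containers:
        - name: portfolio
          image: 10.0.0.2/library/portfolio:latest
```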
After both fixes the pods came up:
NAME                         READY   STATUS    RESTARTS   AGE
portfolio-85477c8cb8-s7vdt   1/1     Running   0          32s
portfolio-85477c8cb8-xlhm9   1/1     Running   0          60s
Step 5 — Routing Traffic with Traefik Ingress
k3s ships with Traefik as its default ingress controller. Without an Ingress
resource, Traefik intercepts all traffic and returns 404.
I created ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portfolio
  namespace: portfolio
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portfolio
                port:
                  number: 80
Applied it and the site came back up immediately.
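The rule above matches requests for any hostname. If a domain gets pointed at the server later, adding a host field scopes the rule to it — a sketch, with example.com standing in as a placeholder:

```yaml
spec:
  rules:
    - host: example.com   # placeholder — substitute the real domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portfolio
                port:
                  number: 80
```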
Step 6 — The Data Problem
After switching to Kubernetes all my blog posts, analytics and uploaded
images were gone. The new pods had no idea about the data that existed in
the old Docker volumes.
The data was still there — safely sitting in Docker's volume directories:
/var/lib/docker/volumes/portfolio_data/_data ← SQLite database
/var/lib/docker/volumes/portfolio_uploads/_data ← uploaded images
The fix was to mount those directories directly into the Kubernetes pods
using hostPath volumes — which I added to deployment.yaml. After applying
the updated manifest everything came back instantly.
Data restored. Zero data loss.
Step 7 — Updating the Pipeline
The last step was updating Drone CI to deploy via Kubernetes instead of
docker run. The old deploy step looked like this:
docker stop portfolio
docker rm portfolio
docker run -d --name portfolio ...
The new deploy step:
- name: deploy-to-kubernetes
  image: appleboy/drone-ssh
  settings:
    host: 10.0.0.1
    username: ubuntu
    key:
      from_secret: oracle_ssh_key
    script:
      - sudo kubectl set image deployment/portfolio \
          portfolio=10.0.0.2/library/portfolio:latest -n portfolio
      - sudo kubectl rollout status deployment/portfolio \
          -n portfolio --timeout=120s
kubectl set image tells Kubernetes to update the image. kubectl rollout
status waits until the new pods are healthy before marking the deployment
as successful. If the new pods crash, the rollout stalls while the old pods
keep serving traffic, and kubectl rollout undo rolls everything back to the
previous version.
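The zero-downtime behaviour comes from the Deployment's rolling update strategy. The defaults work, but pinning the values makes the guarantee explicit — a sketch of an addition to deployment.yaml's spec (these values are my suggestion, not part of the original manifest):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never stop a serving pod before its replacement is Ready
      maxSurge: 1         # add at most one extra pod during the rollout
```

One caveat with this pipeline: because every build pushes the same :latest tag, kubectl set image with an unchanged image string is a no-op and won't trigger a rollout by itself; kubectl rollout restart deployment/portfolio -n portfolio is the usual way to force new pods that re-pull the image.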
What Changed
Before Kubernetes:
- 1 container, manual restarts if it crashed
- Downtime during deployments
- No self-healing
After Kubernetes:
- 2 pods running at all times
- Zero downtime rolling updates
- Automatic restart if a pod crashes
- Persistent data properly mounted
- Traffic routed through Traefik Ingress
What I Learned
k3s is the right way to start with Kubernetes. Full Kubernetes on a
single small server is painful. k3s gives you the real thing with a fraction
of the resource overhead.
Persistent data needs explicit planning in Kubernetes. Containers are
ephemeral by design. If you don't explicitly mount your data somewhere
persistent, it disappears when the pod restarts. Always think about state.
The kubeconfig merge pattern is essential. If you work with multiple
clusters — local minikube, cloud clusters, work clusters — knowing how to
merge kubeconfig files without breaking existing access is a critical skill.
Kubernetes error messages are actually helpful. ImagePullBackOff,
CrashLoopBackOff, Pending — each status tells you exactly what's wrong
if you know where to look. kubectl describe pod is your best friend.
The ingress controller is not optional. k3s installs Traefik by default
and Traefik handles all incoming traffic. Without an Ingress resource pointing
to your service, nothing works. This caught me off guard at midnight.
The Full Pipeline Now
git push
↓
Drone CI triggers
↓
Docker image built
↓
Trivy security scan
↓
Image pushed to Harbor
↓
kubectl set image
↓
Kubernetes rolling update
↓
2 pods running the new version
↓
Site live with zero downtime ✅
What's Next
- cert-manager — automatic SSL certificates on Kubernetes
- ArgoCD — GitOps deployments, the industry standard right now
- Horizontal Pod Autoscaler — automatically scale pods based on traffic
- Alertmanager — get notified on Slack when pods crash or CPU spikes
All of that is coming. One thing at a time.
This site is deployed via this exact pipeline. If you're reading this,
Kubernetes is serving it to you right now.