How I Deployed a Second Site to My Kubernetes Cluster at 5am (And Everything That Went Wrong)
Omobayonle Ogundele
It was midnight. My portfolio was already running on Kubernetes. The CI/CD
pipeline was working. ArgoCD was syncing deployments automatically.
So naturally I decided to deploy a second site — a Django construction
company website called Samak Technical Consultants — to the same cluster.
What followed was five hours of the most frustrating, educational and
ultimately satisfying debugging I've ever done.
This is the full story. Every error. Every fix. Every wall I hit.
The Goal
I wanted two completely separate sites running on the same Oracle Cloud
server:
129.146.31.124:80 → StackedByBayo portfolio (Flask)
129.146.31.124:32218 → Samak Technical Consultants (Django)
Both containerized, both on Kubernetes, both with their own CI/CD pipelines,
both deploying automatically on every git push.
The Stack
- Django — backend framework
- PostgreSQL — database (running as a separate pod in Kubernetes)
- Gunicorn — WSGI server for production
- Whitenoise — static file serving
- Docker — containerization
- Drone CI — builds the image on every push
- Harbor — private Docker registry
- ArgoCD — GitOps deployment
- k3s — Kubernetes cluster on Oracle Cloud
Step 1 — Writing the Dockerfile
The Django project was using SQLite locally. First thing I needed to do was
prepare it for production — switch to PostgreSQL, add Gunicorn, handle
static files.
Updated requirements.txt:
django>=4.2
pillow>=10.0
psycopg2-binary>=2.9
gunicorn>=21.0
whitenoise>=6.6
Updated settings.py to use environment variables for the database:
import os

if os.environ.get('DATABASE_URL'):
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': os.environ.get('DB_NAME', 'samak'),
            'USER': os.environ.get('DB_USER', 'samak'),
            'PASSWORD': os.environ.get('DB_PASSWORD', ''),
            'HOST': os.environ.get('DB_HOST', 'localhost'),
            'PORT': os.environ.get('DB_PORT', '5432'),
        }
    }
else:
    # Local development keeps using SQLite
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': BASE_DIR / 'db.sqlite3',
        }
    }
The Dockerfile:
FROM python:3.11-slim
WORKDIR /app
RUN apt-get update && apt-get install -y \
libpq-dev gcc \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
RUN python manage.py collectstatic --noinput
EXPOSE 8000
CMD ["gunicorn", "samak_technical.wsgi:application", \
"--bind", "0.0.0.0:8000", "--workers", "2"]
Built it locally — worked perfectly. First win.
Step 2 — Setting Up the CI/CD Pipeline
I already had Drone CI running for my portfolio, so I created a .drone.yml
for Samak following the same pattern: clone, build, push to Harbor, then update
the Kubernetes manifest so ArgoCD detects the change.
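The pipeline file itself is short. Here's a minimal sketch of the shape mine followed — the step names, plugin settings, and registry username are illustrative assumptions, not the exact file:

```yaml
kind: pipeline
type: docker
name: samak

steps:
  - name: build-and-push
    image: plugins/docker
    settings:
      repo: 10.0.0.2/library/samak
      registry: 10.0.0.2
      insecure: true              # Harbor runs over plain HTTP here
      tags: build-${DRONE_BUILD_NUMBER}
      username: admin             # assumption — use your Harbor user
      password:
        from_secret: harbor_password

  - name: update-manifest
    image: alpine/git
    environment:
      GITEA_TOKEN:
        from_secret: gitea_token
    commands:
      - sh update-manifest.sh ${DRONE_BUILD_NUMBER}
```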
Also created an update-manifest.sh script to update the image tag in
k8s/deployment.yaml after every build:
#!/bin/sh
set -e  # fail the pipeline step if any command below fails

git config --global user.email "drone@ci.local"
git config --global user.name "Drone CI"

git clone http://Bayo:$GITEA_TOKEN@172.17.0.1:3001/Bayo/Samak-website.git /tmp/repo

# Point the deployment at the image Drone just pushed; $1 is the build number
sed -i "s|image:.*samak:.*|image: 10.0.0.2/library/samak:build-$1|g" \
    /tmp/repo/k8s/deployment.yaml

git -C /tmp/repo add k8s/deployment.yaml
git -C /tmp/repo commit -m "ci: update image to build-$1 [skip ci]"
git -C /tmp/repo push http://Bayo:$GITEA_TOKEN@172.17.0.1:3001/Bayo/Samak-website.git
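The sed pattern is the part most likely to silently do nothing, so it's worth a dry run against a sample line before trusting it in CI. A quick local check (scratch file path is arbitrary):

```shell
# Write one sample line shaped like the deployment manifest
printf '        image: 10.0.0.2/library/samak:build-41\n' > /tmp/deploy-snippet.yaml

# Apply the same substitution update-manifest.sh uses
sed -i "s|image:.*samak:.*|image: 10.0.0.2/library/samak:build-42|g" /tmp/deploy-snippet.yaml

# Confirm the tag actually changed
cat /tmp/deploy-snippet.yaml
```

If the output still shows the old tag, the pattern never matched and ArgoCD will have nothing to sync.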
Then the walls started.
Wall #1 — Wrong Repository Name
First build failed immediately:
fatal: repository 'http://172.17.0.1:3001/Bayo/samak_technical_site.git/' not found
My local folder was called samak_technical_site but the Gitea repo was
called Samak-website. I was cloning the wrong name.
Fix: Query the Gitea API to find the actual repo name:
curl -H "Authorization: token $TOKEN" \
http://172.17.0.1:3001/api/v1/repos/search?q=samak
Result: Bayo/Samak-website. Updated all references and moved on.
Wall #2 — Missing Secrets
Next build failed with:
unauthorized: unauthorized to access repository: library/samak, action: push
I had added the gitea_token secret to Drone but completely forgot to add
harbor_password. The pipeline couldn't authenticate with Harbor.
Fix: Drone → Bayo/Samak-website → Settings → Secrets → add
harbor_password.
Thirty seconds to fix. One hour to figure out.
Wall #3 — Harbor Registry Not Reachable from Drone Runner
After fixing the secrets, still getting:
Get "https://10.0.0.2/v2/": http: server gave HTTP response to HTTPS client
The Drone runner was trying to connect to Harbor over HTTPS but Harbor runs
over HTTP. My portfolio pipeline used the same IP and worked fine — so why
was Samak failing?
I spent a long time trying different approaches:
- Setting DOCKER_DAEMON_CONFIG environment variable
- Mounting Docker config files
- Using different IPs (172.18.0.9:8080, 172.16.18.128)
None of them worked. Then I realised the real issue — Harbor was completely
down. The harbor-log container was in a restart loop and none of the other
Harbor containers had started.
Fix:
cd /home/bayo/homelab/cicd/harbor
sudo docker compose up -d
Once Harbor was actually running, the pipeline worked.
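A habit I've adopted since: before debugging connectivity, poll the endpoint with a small retry helper so a slow-starting Harbor and a dead Harbor look different. This is a generic sketch, not something from the pipeline:

```shell
# wait_for N CMD...: retry CMD up to N times, one second apart;
# returns 0 on the first success, 1 if it never succeeds
wait_for() {
  tries=$1; shift
  i=0
  while ! "$@"; do
    i=$((i+1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
  return 0
}

# Demo with a probe that always succeeds
wait_for 3 true && echo "reachable"

# Real usage would be something like:
#   wait_for 30 curl -sf http://10.0.0.2/v2/
# which separates "Harbor still booting" from "Harbor is down"
```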
Wall #4 — Gunicorn Not Found
First deploy to Kubernetes and the pod immediately went into
CrashLoopBackOff. The error from kubectl describe:
exec: "gunicorn": executable file not found in $PATH
Gunicorn wasn't in the container. I checked requirements.txt and found
only two packages:
django>=4.2
pillow>=10.0
The updated requirements.txt with gunicorn had never been saved. The Docker
image had been built without it.
Fix: Actually save the requirements.txt this time:
django>=4.2
pillow>=10.0
psycopg2-binary>=2.9
gunicorn>=21.0
whitenoise>=6.6
Rebuild, redeploy, pod comes up running.
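A hypothetical pre-build guard would have caught this before the image was ever built: check that every package the container depends on is actually on disk. Here a scratch file stands in for the real requirements.txt:

```shell
# Scratch requirements file missing three packages (as mine was)
printf 'django>=4.2\npillow>=10.0\n' > /tmp/requirements.txt

# Report anything the image needs that never made it into the file
for pkg in django gunicorn psycopg2-binary whitenoise; do
  grep -qi "^$pkg" /tmp/requirements.txt || echo "missing: $pkg"
done
```

Wired into the CI step before docker build (with an exit 1 on any miss), this turns "pod crash-loops an hour later" into "build fails in ten seconds".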
Step 3 — The Kubernetes Manifests
With the pipeline working I needed the actual Kubernetes resources. I created
five files in a k8s/ folder:
namespace.yaml — isolate Samak from the portfolio:
apiVersion: v1
kind: Namespace
metadata:
  name: samak
secret.yaml — database credentials as Kubernetes secrets:
apiVersion: v1
kind: Secret
metadata:
  name: samak-secrets
  namespace: samak
type: Opaque
stringData:
  DB_NAME: samak
  DB_USER: samak
  DB_PASSWORD: Samak2024!
  DB_HOST: samak-postgres
  DB_PORT: "5432"
  # any non-empty value flips settings.py over to PostgreSQL
  DATABASE_URL: "true"
postgres.yaml — PostgreSQL running as a pod with persistent storage:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: samak-postgres
  namespace: samak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: samak-postgres
  template:
    metadata:
      labels:
        app: samak-postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16-alpine
          env:
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: samak-secrets
                  key: DB_NAME
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: samak-secrets
                  key: DB_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: samak-secrets
                  key: DB_PASSWORD
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-data
          hostPath:
            path: /var/lib/samak-postgres
            type: DirectoryOrCreate
deployment.yaml — the Django app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: samak
  namespace: samak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: samak
  template:
    metadata:
      labels:
        app: samak
    spec:
      containers:
        - name: samak
          image: 10.0.0.2/library/samak:latest
          ports:
            - containerPort: 8000
          envFrom:
            - secretRef:
                name: samak-secrets
service.yaml + ingress.yaml — expose it via Traefik.
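For completeness, a minimal sketch of what such a k8s/service.yaml can look like — the cluster-internal port 80 forwarding to Gunicorn on 8000 is an assumption, and the Traefik ingress is analogous:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: samak
  namespace: samak
spec:
  selector:
    app: samak          # must match the Deployment's pod labels
  ports:
    - port: 80          # port other cluster workloads (and the ingress) use
      targetPort: 8000  # Gunicorn's bind port inside the container
```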
Step 4 — Applying to Kubernetes
kubectl apply -f k8s/
First attempt failed because the namespace didn't exist yet when the namespaced
resources tried to create: kubectl apply -f applies files in alphabetical order,
so deployment.yaml ran before namespace.yaml. Applied again immediately and
everything created successfully.
PostgreSQL pod came up healthy. Django pod started pulling the image.
Then ran the database migrations:
kubectl exec -n samak deployment/samak -- python manage.py migrate
Step 5 — Exposing the Site
Since the portfolio already owns port 80 on the server, I exposed Samak on
a NodePort instead:
kubectl patch svc samak -n samak -p '{"spec":{"type":"NodePort"}}'
kubectl get svc samak -n samak
Got port 32218 (Kubernetes assigns NodePorts from the 30000-32767 range by
default). Opened it in Oracle Cloud's security list and visited:
http://129.146.31.124:32218
Site was live.
What I Learned
Always check if your services are actually running before debugging network
issues. I spent an hour trying to fix Harbor connectivity when Harbor was
simply down.
Save your files. The gunicorn issue happened because I updated
requirements.txt in my head but never actually saved it. Always verify with
cat requirements.txt before building.
Kubernetes secrets are the right way to handle credentials. Instead of
hardcoding database passwords in the image or in committed config files, they
live in Kubernetes secrets and get injected as environment variables at
runtime. Clean, and rotatable without rebuilding the image.
Namespaces keep things separated. Both sites run on the same cluster but
in completely separate namespaces — portfolio and samak. They can't
interfere with each other and I can manage them independently.
The second deployment is always faster. The first time I set up a
Kubernetes deployment for my portfolio it took days. This one took one night
— most of that time spent on one networking issue. Every deployment teaches
you something that makes the next one easier.
The Full Picture
Two completely separate sites. One server. One Kubernetes cluster. Two
independent CI/CD pipelines. Both deploying automatically on every git push.
git push to Gitea
↓
Drone CI builds Docker image
↓
Image pushed to Harbor registry
↓
Drone updates k8s/deployment.yaml
↓
ArgoCD detects change and syncs
↓
Kubernetes rolls out new pods
↓
Site live — zero downtime
Is this what DevOps engineers deal with every day?
Yes. Exactly this. The difference is that experience lets you recognize the
walls faster and know which direction to run. Tonight I hit every wall. Next
time I'll walk straight through.
Both sites are live right now on the same Oracle Cloud free tier instance.
Zero cloud costs. Zero compromises.
Omobayonle Ogundele
DevOps Engineer based in Lagos, Nigeria. Building reliable infrastructure and sharing logs from the edge of production.