Why My Analytics Were Lying to Me (And How Kubernetes Was to Blame)
Omobayonle Ogundele
I had 3,648 page views in my database. But my unique visitors count was
stuck at 1. Every single visitor looked like they came from the same IP
address: 10.42.0.8.
Something was wrong.
This is the story of how moving to Kubernetes silently broke my analytics
and what I learned about traffic visibility behind a load balancer.
The Setup
My portfolio runs on a Flask app deployed to a Kubernetes cluster on Oracle
Cloud. Traffic flows like this:
Visitor's browser
↓
Oracle Cloud (public IP)
↓
Traefik Ingress Controller (k3s built-in)
↓
Kubernetes Service
↓
Flask pod (one of 3 replicas)
When I was running the app directly with docker run, the flow was simpler:
Visitor's browser → Oracle Cloud → Flask app
Flask could see the real visitor IP directly via request.remote_addr.
The Bug
My Flask app tracks page views like this:
view = PageView(
path=request.path,
ip=request.remote_addr,
user_agent=request.headers.get('User-Agent'),
created_at=datetime.utcnow()
)
request.remote_addr — returns the IP address of whoever sent the request
to Flask.
Before Kubernetes that was the visitor's real IP. After Kubernetes it was
10.42.0.8 — the internal IP of Traefik, the ingress controller.
Every single request was coming from 10.42.0.8 because Traefik was
proxying all traffic to the Flask pods. Flask had no idea there were real
humans behind Traefik — it just saw Traefik's internal IP on every request.
Why This Happens
When traffic passes through a proxy or load balancer the original client IP
is lost unless the proxy explicitly passes it along.
Traefik does pass it along — but in an HTTP header called X-Forwarded-For,
not in the actual TCP connection that request.remote_addr reads from.
X-Forwarded-For is a standard header that proxies use to preserve the
original client IP as requests pass through multiple layers:
X-Forwarded-For: 41.58.123.45, 10.42.0.1
The first IP in the list is the original client. Subsequent IPs are the
proxies the request passed through.
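In Python, pulling the client out of that header is a one-liner. A small sketch (the function name is mine):

```python
def client_ip_from_xff(header_value: str) -> str:
    """Return the original client IP from an X-Forwarded-For value.

    The leftmost entry is the original client; each proxy appends
    the address it received the request from as it passes through.
    """
    return header_value.split(",")[0].strip()

# client_ip_from_xff("41.58.123.45, 10.42.0.1") -> "41.58.123.45"
```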
So the real visitor IP was sitting in the request headers the whole time.
Flask just wasn't reading it.
What My Data Looked Like
I queried the database directly to see the damage:
kubectl exec -n portfolio deployment/portfolio -- python3 -c "
import sqlite3
conn = sqlite3.connect('/app/instance/portfolio.db')
cur = conn.cursor()
cur.execute('SELECT COUNT(*) FROM page_view')
print('Total views:', cur.fetchone()[0])
cur.execute('SELECT * FROM page_view ORDER BY id DESC LIMIT 5')
print('Latest:', cur.fetchall())
conn.close()
"
Output:
Total views: 3648
Latest: [
(3648, '/', '10.42.0.8', 'Mozilla/5.0 ...Chrome/144...', '2026-03-14 18:30:12'),
(3647, '/blog', '10.42.0.8', 'Mozilla/5.0 ...Chrome/144...', '2026-03-14 18:26:44'),
(3646, '/favicon.ico', '10.42.0.8', 'Mozilla/5.0 ...', '2026-03-14 18:26:29'),
...
]
3,648 views. Every single one from 10.42.0.8. My analytics dashboard
was showing 1 unique visitor for the entire life of the site.
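A faster check than eyeballing rows is counting distinct IPs alongside total views. A sketch against the same SQLite schema (table and column names as in my app):

```python
import sqlite3

def views_summary(conn: sqlite3.Connection) -> tuple:
    """Return (distinct_ip_count, total_view_count) for page_view.

    If the first number is 1 while the second is in the thousands,
    you're almost certainly logging a proxy's IP.
    """
    cur = conn.execute("SELECT COUNT(DISTINCT ip), COUNT(*) FROM page_view")
    return cur.fetchone()
```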
The Fix
One line change in Flask:
# Before — always returns Traefik's internal IP
ip=request.remote_addr,
# After — reads real client IP from X-Forwarded-For header
ip=request.headers.get('X-Forwarded-For', request.remote_addr).split(',')[0].strip(),
Breaking this down:
- request.headers.get('X-Forwarded-For', request.remote_addr) — get the
  X-Forwarded-For header, fall back to remote_addr if it doesn't exist.
- .split(',')[0] — take only the first IP in the list (the original client).
- .strip() — remove any whitespace.
Now Flask reads the real visitor IP that Traefik preserved in the header.
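Werkzeug (which Flask is built on) also ships middleware for exactly this, so you don't have to parse the header at every call site. A sketch, assuming Traefik is the only proxy hop in front of the app:

```python
from flask import Flask, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)

# Trust X-Forwarded-For from exactly one proxy hop (Traefik).
# After this, request.remote_addr is the real client IP again,
# so existing tracking code works unchanged.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1)

@app.route("/whoami")
def whoami():
    return request.remote_addr
```

Only set `x_for` to the number of proxies you actually control — trusting the header from arbitrary clients lets anyone spoof their IP.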
The Broader Lesson — Observability Behind a Proxy
This bug taught me something important about running applications behind
a reverse proxy or load balancer: your app no longer has direct visibility
into who is talking to it.
Every layer you add between your users and your application changes what
your application can see:
- Direct connection: Flask sees the real IP via remote_addr
- Behind Nginx: Flask sees Nginx's IP — need X-Real-IP or X-Forwarded-For
- Behind Traefik: Flask sees Traefik's IP — need X-Forwarded-For
- Behind AWS ALB: Flask sees the ALB's IP — need X-Forwarded-For
- Behind Cloudflare: Flask sees Cloudflare's IP — need CF-Connecting-IP
Each proxy has its own way of passing the original client IP. You have to
know which proxy you're behind and which header it uses.
This applies to more than just analytics:
- Rate limiting — if you rate limit by IP and you're reading the wrong
  IP, you'll rate limit the proxy instead of the actual client
- Geolocation — you'll geolocate your load balancer's data center
  instead of your users
- Security logs — your logs will show proxy IPs instead of attacker IPs
- Personalisation — any feature that depends on knowing who the user is
  by IP will be broken
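The rate-limiting failure mode is easy to demonstrate with a toy counter (a sketch, not a real limiter): when every request carries the proxy's IP, all visitors share a single bucket.

```python
from collections import defaultdict

LIMIT = 100               # requests allowed per IP
counts = defaultdict(int)

def allow(ip: str) -> bool:
    """Toy per-IP limiter: True while the IP is under LIMIT requests."""
    counts[ip] += 1
    return counts[ip] <= LIMIT

# If remote_addr is always 10.42.0.8, the 101st request from *anyone*
# is rejected, because every visitor increments the same counter.
```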
How to Debug This in Any Framework
If your analytics or logs show suspicious IPs like 10.x.x.x, 172.16-31.x.x
or 192.168.x.x — those are private (RFC 1918) ranges. You're reading a
proxy IP, not a real visitor.
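Python's standard library can do the private-range check for you. A small sketch (the function name is mine):

```python
import ipaddress

def is_proxy_suspect(ip_string: str) -> bool:
    """True if the recorded IP is in a private range (RFC 1918 etc.),
    meaning you're almost certainly logging a proxy, not a visitor."""
    try:
        return ipaddress.ip_address(ip_string).is_private
    except ValueError:
        # Not a parseable IP at all — also suspicious
        return True
```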
Check what headers are actually coming in:
# Flask — print all headers for debugging
@app.before_request
def log_headers():
print(dict(request.headers))
Look for headers like:
- X-Forwarded-For — most proxies (Nginx, Traefik, AWS ALB)
- X-Real-IP — Nginx specifically
- CF-Connecting-IP — Cloudflare
- True-Client-IP — Cloudflare Enterprise, Akamai
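That header list folds naturally into one lookup helper. A sketch — the priority order and function name are mine, and you should trim the list to headers your own proxy actually sets, since clients can forge any of them:

```python
# Headers checked most-specific first; trim to match your actual stack.
CLIENT_IP_HEADERS = (
    "CF-Connecting-IP",   # Cloudflare
    "True-Client-IP",     # Cloudflare Enterprise, Akamai
    "X-Real-IP",          # Nginx
    "X-Forwarded-For",    # most proxies (may hold a comma-separated chain)
)

def resolve_client_ip(headers, fallback: str) -> str:
    """Return the first client IP found in known proxy headers,
    or the fallback (e.g. request.remote_addr) if none are present."""
    for name in CLIENT_IP_HEADERS:
        value = headers.get(name)
        if value:
            return value.split(",")[0].strip()
    return fallback
```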
The Git Workflow Lesson
While pushing this fix I also ran into a common Git issue worth mentioning.
My CI pipeline (Drone) automatically updates k8s/deployment.yaml and
pushes it to Gitea after every build. So when I tried to push my fix, Gitea
rejected it because the remote was ahead of my local branch.
The fix:
git stash # temporarily save local changes
git pull origin main --rebase # fetch and replay commits on top
git stash pop # reapply your saved changes
git push origin main
git stash is like putting your work in a drawer. You clean up, pull the
latest changes, then take your work back out and continue. This is a pattern
you'll use constantly when working on a repo that has automated processes
pushing to it — like a GitOps pipeline.
What This Means for Production Systems
In a real production environment this class of bug — where a proxy silently
changes what your application sees — is surprisingly common and surprisingly
easy to miss.
The 3,648 page views were all tracked correctly. The timestamps were right.
The paths were right. The user agents were right. Only the IPs were wrong —
and if you're not specifically looking for private IP ranges in your
analytics you might never notice.
This is why observability matters. Not just "is the app up?" but "is the
app seeing what it should be seeing?"
Key Takeaways
- Always check what IP your app is actually recording. Query your database
  directly and look at the raw data.
- Private IPs in your logs mean you're behind a proxy. 10.x.x.x,
  172.16-31.x.x and 192.168.x.x are never real visitor IPs.
- Each proxy layer passes client IPs differently. Know your stack and
  read the right header for your setup.
- GitOps pipelines push to your repo automatically. Always pull before
  pushing when you have automated processes writing to your repo.
This fix is live on this site right now. If you're reading this, your
real IP is being tracked correctly — not Traefik's.
Omobayonle Ogundele
DevOps Engineer based in Lagos, Nigeria. Building reliable infrastructure and sharing logs from the edge of production.