LOG_ENTRY: 2026.03.14 · 10 TELEMETRY_HITS

How I Completed My GitOps Pipeline with ArgoCD

Omobayonle Ogundele

MAIN_NODE: DEVOPS_ENGINEER

I had Kubernetes running. Deployments were working. But I was still SSHing
into the server and running kubectl apply manually after every build.

That's not GitOps. That's just Kubernetes with extra steps.

GitOps means your Git repository is the single source of truth for your
infrastructure. You push code, Git changes, and the cluster automatically
syncs itself to match. No SSH. No manual deploys. No "did I remember to
apply that change?"

That's what ArgoCD does. And this is how I set it up.


What is ArgoCD?

ArgoCD is a continuous delivery tool for Kubernetes. It watches a Git
repository and automatically applies any changes to your cluster.

The key difference from a traditional pipeline is the direction of the
deployment:

Traditional CD (push-based):

Pipeline SSHes into server → runs kubectl apply → cluster updated

GitOps with ArgoCD (pull-based):

ArgoCD watches Git → detects changes → pulls and applies → cluster updated

With ArgoCD, the cluster pulls its desired state from Git instead of being
pushed to. This means:

  • Git is always the source of truth
  • If someone manually changes something on the cluster, ArgoCD detects the
    drift and corrects it automatically
  • You have a complete audit trail of every deployment in your Git history
  • Rolling back is just reverting a commit
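That last point deserves a concrete example. Here's a minimal sketch of the rollback mechanic in a throwaway repo — the file name and tags are stand-ins for my real manifest:

```shell
# Simulate two CI commits bumping the image tag, then roll back.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email "drone@ci.local"
git config user.name "Drone CI"

echo "image: 10.0.0.2/library/portfolio:build-59" > deployment.yaml
git add deployment.yaml && git commit -qm "ci update image to build-59"

echo "image: 10.0.0.2/library/portfolio:build-60" > deployment.yaml
git add deployment.yaml && git commit -qm "ci update image to build-60"

# Rollback: revert the bump commit. After a push, ArgoCD would sync
# the cluster back to build-59 — no kubectl involved.
git revert --no-edit HEAD
cat deployment.yaml   # image: 10.0.0.2/library/portfolio:build-59
```

In the real pipeline the revert is followed by a `git push`, and ArgoCD does the rest.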

My Setup Before ArgoCD

Before ArgoCD, my Drone CI pipeline had three steps:

  1. Clone the code
  2. Build and push the Docker image to Harbor
  3. SSH into the Oracle server and run kubectl set image

Step 3 was the problem. It worked, but it meant my pipeline needed SSH
access to the production server. It also meant the deployment logic lived
inside the pipeline, not in Git.
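A push-based deploy step of that kind typically looks something like this in `.drone.yml` — a sketch of the pattern, not my exact config; the `appleboy/drone-ssh` plugin image, username, and secret name are illustrative assumptions:

```yaml
# Illustrative push-based deploy step (the thing GitOps replaces):
# the pipeline SSHes into the server and mutates the cluster directly.
- name: deploy
  image: appleboy/drone-ssh          # community SSH plugin (assumed here)
  settings:
    host: 129.146.31.124
    username: ubuntu                 # placeholder user
    key:
      from_secret: ssh_key
    script:
      - kubectl set image deployment/portfolio portfolio=10.0.0.2/library/portfolio:latest
```

Every piece of this — the SSH key, the host IP, the kubectl command — lives in the pipeline rather than in Git, which is exactly what ArgoCD removes.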


Installing ArgoCD on k3s

ArgoCD installs as a set of Kubernetes resources. One command:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

After about two minutes, all seven pods were running:

sudo kubectl get pods -n argocd
NAME                                               READY   STATUS
argocd-application-controller-0                    1/1     Running
argocd-applicationset-controller-974d64569-czs79   1/1     Running
argocd-dex-server-66fc67645-ltcsn                  1/1     Running
argocd-notifications-controller-5474d4cbb6-9sgvd   1/1     Running
argocd-redis-6888c8c66f-msslk                      1/1     Running
argocd-repo-server-6c4975f4ff-4rc8n                1/1     Running
argocd-server-5f7ff864d5-xx7mz                     1/1     Running

Accessing the ArgoCD UI

By default ArgoCD's server is only accessible inside the cluster. I exposed
it as a NodePort:

sudo kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
sudo kubectl get svc argocd-server -n argocd

This gave me port 30455. I also had to open that port in Oracle Cloud's
security list under Networking → VCN → Security Lists → Add Ingress Rule.

Then I got the initial admin password:

sudo kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d && echo

ArgoCD UI was live at http://129.146.31.124:30455.


Connecting ArgoCD to Gitea

My Git server is a self-hosted Gitea instance running on my local homelab,
accessible via WireGuard VPN at 10.0.0.2:3001.

In the ArgoCD UI:

  1. Settings → Repositories → Connect Repo
  2. Connection method: HTTPS
  3. URL: http://10.0.0.2:3001/Bayo/Porfolio-website.git
  4. Username and password: my Gitea credentials
  5. Click Connect → Successful

ArgoCD could reach Gitea through the WireGuard tunnel.
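The same connection can also be declared as a Kubernetes Secret instead of clicking through the UI, using ArgoCD's documented `repository` secret type — a sketch with placeholder credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitea-repo
  namespace: argocd
  labels:
    # This label tells ArgoCD to treat the Secret as a repo credential
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: http://10.0.0.2:3001/Bayo/Porfolio-website.git
  username: Bayo
  password: <gitea password or token>
```

Applying this with `kubectl apply -n argocd` has the same effect as the Connect Repo form.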


Creating the Application

In ArgoCD an Application is the link between a Git repo path and a
Kubernetes namespace. I created one for my portfolio:

App Details:
- Name: portfolio
- Project: default
- Sync Policy: Automatic with Prune and Self Heal enabled

Source:
- Repository: http://10.0.0.2:3001/Bayo/Porfolio-website.git
- Revision: HEAD
- Path: k8s

Destination:
- Cluster: https://kubernetes.default.svc
- Namespace: portfolio

The moment I created the app ArgoCD synced the k8s/ folder and all my
Kubernetes resources appeared in a live tree view — deployment, service,
ingress, all healthy.
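The UI form maps one-to-one onto an Application custom resource, so the app definition itself can live in Git too — a sketch equivalent to the settings above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: portfolio
  namespace: argocd
spec:
  project: default
  source:
    repoURL: http://10.0.0.2:3001/Bayo/Porfolio-website.git
    targetRevision: HEAD
    path: k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: portfolio
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes on the cluster
```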


The Missing Piece — Updating the Image Tag

ArgoCD watches the k8s/ folder for changes. But my deployment.yaml was
still pointing to portfolio:latest. ArgoCD had no way of knowing when a
new image was pushed to Harbor.

The fix was to make Drone CI update the image tag in deployment.yaml after
every build, then push that change back to Gitea. ArgoCD would detect the
YAML change and sync automatically.

I wrote a shell script update-manifest.sh:

#!/bin/sh
set -e  # fail the pipeline step if any command fails

# Identity for the automated commit
git config --global user.email "drone@ci.local"
git config --global user.name "Drone CI"

# Clone the repo (172.17.0.1 is the Docker bridge gateway, so the
# Drone container can reach Gitea running on the host)
git clone http://172.17.0.1:3001/Bayo/Porfolio-website.git /tmp/repo

# Point the deployment at the image Drone just pushed to Harbor
sed -i "s|image:.*portfolio:.*|image: 10.0.0.2/library/portfolio:build-$1|g" \
  /tmp/repo/k8s/deployment.yaml

# Commit and push the bumped manifest back to Gitea
git -C /tmp/repo add k8s/deployment.yaml
git -C /tmp/repo commit -m "ci update image to build-$1 [skip ci]"
git -C /tmp/repo push http://Bayo:$GITEA_TOKEN@172.17.0.1:3001/Bayo/Porfolio-website.git

Notice [skip ci] in the commit message — this prevents Drone from
triggering itself in an infinite loop when it pushes the updated manifest.

And the updated pipeline step:

  - name: update-k8s-manifest
    image: alpine/git
    environment:
      GITEA_TOKEN:
        from_secret: gitea_token
    commands:
      - sh update-manifest.sh ${DRONE_BUILD_NUMBER}

I created a Gitea access token with write:repository permissions and
stored it as a Drone secret called gitea_token.
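If you use the Drone CLI rather than the web UI, storing that secret looks like this — the repo slug is mine, and the token value is a placeholder:

```shell
drone secret add \
  --repository Bayo/Porfolio-website \
  --name gitea_token \
  --data "<gitea-token>"
```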


The Complete GitOps Flow

Once everything was wired together the full flow looked like this:

git push to Gitea
        ↓
Drone CI triggers
        ↓
Docker image built → pushed to Harbor as build-60
        ↓
Drone updates k8s/deployment.yaml:
  image: 10.0.0.2/library/portfolio:build-60
        ↓
Drone pushes updated YAML to Gitea [skip ci]
        ↓
ArgoCD detects change in k8s/ folder
        ↓
ArgoCD applies updated deployment to cluster
        ↓
Kubernetes rolling update → 3 pods updated one by one
        ↓
Site live with zero downtime

I never touched the server. I just pushed code and everything else happened
automatically.


What Self Heal Means in Practice

I enabled Self Heal on the ArgoCD application. This means if anyone
manually changes something on the cluster — scaling pods, editing a
configmap, anything — ArgoCD will detect the drift from what's in Git and
revert it within a few minutes.
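You can see it in action by drifting the cluster on purpose — assuming the `portfolio` app from earlier, with 3 replicas declared in Git:

```shell
# Drift the cluster away from Git by hand
sudo kubectl scale deployment/portfolio -n portfolio --replicas=5

# Watch the replica count: ArgoCD marks the app OutOfSync and
# self-heals it back to the 3 replicas declared in Git
sudo kubectl get deployment portfolio -n portfolio -w
```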

Git is the truth. The cluster is just a reflection of Git.

This is a fundamental shift in how you think about infrastructure. Instead
of "what is running on my server right now?" the question becomes "what does
my Git repo say should be running?"


Debugging ArgoCD and Gitea Connectivity

One issue I ran into: ArgoCD kept saying the app path didn't exist even
though the k8s/ folder was in the repo.

I verified ArgoCD could actually reach Gitea by spinning up a temporary
Alpine pod inside the cluster and fetching the file directly:

sudo kubectl run test --image=alpine --rm -it --restart=Never -n argocd \
  -- wget -qO- http://10.0.0.2:3001/Bayo/Porfolio-website/raw/branch/main/k8s/deployment.yaml

It returned the file contents — so the network was fine. The issue turned
out to be that the k8s/ folder hadn't been pushed to Gitea yet. Once I
pushed the manifests the app synced immediately.

Always verify the basics before assuming the tool is broken.


What Changed

Before ArgoCD:
- Pipeline SSHed into server after every build
- Manual kubectl apply if something went wrong
- No visibility into what was actually deployed vs what was in Git

After ArgoCD:
- Zero SSH in the deployment process
- Every deployment is a Git commit — full audit trail
- Drift detection — cluster always matches Git
- Visual tree view of all Kubernetes resources in real time
- One click rollback by reverting a commit


My Full Pipeline Now

Code           Gitea (self-hosted)
CI             Drone CI (build + test + push)
Registry       Harbor (private Docker registry)
Security       Trivy (vulnerability scanning)
GitOps         ArgoCD (watches Git, syncs cluster)
Orchestration  Kubernetes k3s (runs the app)
Monitoring     Prometheus + Grafana
Alerting       Alertmanager → Slack

Every component is self-hosted on my homelab or Oracle Cloud free tier.
Zero cloud costs. Zero compromises.


What's Next

  • cert-manager — automatic SSL certificates on Kubernetes
  • Helm charts — package the Kubernetes manifests properly
  • Horizontal Pod Autoscaler — scale pods based on traffic automatically

One thing at a time. 🚀


This site is deployed via this exact pipeline. Every blog post you read
here was delivered by ArgoCD syncing a Kubernetes cluster at midnight.

Follow the journey on Twitter or connect on LinkedIn.

Omobayonle Ogundele

DevOps Engineer based in Lagos, Nigeria. Building reliable infrastructure and sharing logs from the edge of production.
