Deploying a Containerized E-commerce App on AWS EC2: A Real DevOps Walkthrough
Omobayonle Ogundele
Most tutorials stop at "it works on my machine."
This one doesn't.
This project was about understanding the entire deployment flow from scratch —
not just writing code, but shipping it. Code → version control → containerization
→ registry → cloud server → running services → observability. Every step, end to end,
on a real AWS EC2 instance.
If you've ever wondered what actually happens between writing code and having it
live on the internet — this is that walkthrough.
Repository: github.com/BayoJohn/Project-2
The Problem I Wanted to Solve
When I started learning DevOps I kept running into the same wall — everything
worked locally but the moment deployment came up, the questions multiplied:
- How does the application get packaged for a server?
- How does the server get the right version of the application?
- How are multiple services started reliably together?
- How do you know if something breaks at 3am?
I could have kept watching tutorials that answered these questions theoretically.
Instead I built a system that answered them practically.
The Stack
Before getting into the walkthrough, here's everything involved:
- Application — E-commerce app with multiple services
- Version Control — GitHub
- Containerization — Docker + Docker Compose
- CI/CD — GitHub Actions
- Registry — Docker Hub
- Cloud — AWS EC2 (Ubuntu)
- Observability — Grafana + Prometheus
Step 1 — Version Control as the Foundation
Everything starts with Git. I created a GitHub repository to serve as the
single source of truth for both the application code and the infrastructure
configuration.
This matters more than people realise. Version control isn't just backup —
it's traceability. Every change to the system has a history. When something
breaks in production (and it will), that history is what saves you.
The repository structure kept application code and Docker/Compose configuration
together so the entire system could be reproduced from a single git clone.
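A typical layout for this kind of project — application code living alongside its deployment configuration — looks something like this (the directory names here are illustrative, not the actual contents of the repo):

```text
Project-2/
├── src/                      # application code
├── Dockerfile                # how the app image is built
├── docker-compose.yml        # all services, declared together
├── prometheus.yml            # scrape configuration for Prometheus
└── .github/
    └── workflows/
        └── build-push.yml    # CI pipeline (Step 3)
```

The point is that nothing needed to run the system lives outside the repository.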
Step 2 — Containerizing with Docker
I containerized the application with Docker because consistency is
non-negotiable in deployment.
Without containers, you're betting that the server has the same OS, the same
language runtime, the same dependencies and the same configuration as your
local machine. That bet loses constantly.
With Docker, the container is the environment. It runs the same way everywhere —
on your laptop, on a colleague's machine, on an EC2 instance in a data center
in Virginia.
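The post doesn't show the application's Dockerfile, but a minimal sketch gives a feel for what "the container is the environment" means in practice. This assumes a Node.js app listening on port 3000 (inferred from the Compose port mapping below); the actual base image, file layout and start command may differ:

```dockerfile
# Minimal sketch — base image, file layout and start command are assumptions
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and declare the port it listens on
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Everything the app needs — runtime, dependencies, configuration — is baked into the image, which is why it behaves identically on every host.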
Docker Compose handled the multi-service orchestration. Instead of
starting services manually one by one, I declared the entire system in a
single docker-compose.yml:
services:
  app:
    image: bayojohn/ecommerce-app:latest
    ports:
      - "80:3000"
    depends_on:
      - db
      - redis
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: ecommerce
      POSTGRES_PASSWORD: ${DB_PASSWORD}
  redis:
    image: redis:alpine
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  grafana:
    image: grafana/grafana
    ports:
      - "3001:3000"
One file. Five services. One command to start everything.
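One detail worth calling out: the ${DB_PASSWORD} reference in the db service isn't magic. Compose substitutes it from the shell environment or from a .env file sitting next to docker-compose.yml, which keeps secrets out of version control (just make sure .env is in .gitignore). A minimal example, with a placeholder value:

```text
# .env — lives next to docker-compose.yml, never committed
DB_PASSWORD=replace-with-a-real-secret
```

Compose reads this file automatically; no extra flags needed.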
Step 3 — Automating Builds with GitHub Actions
Manual builds are a trap. You forget a step, you build on the wrong branch,
you push an untested image. Automation removes the human error.
I set up GitHub Actions to automatically build and push the Docker image
to Docker Hub on every push to main:
name: Build and Push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_TOKEN }}
      - name: Build and Push
        uses: docker/build-push-action@v4
        with:
          push: true
          tags: bayojohn/ecommerce-app:latest
Every time code is pushed:
1. GitHub spins up a fresh Ubuntu runner
2. Builds the Docker image
3. Pushes it to Docker Hub
4. The server can pull the latest image anytime
The server never compiles code. The local machine never pushes images manually.
The pipeline does it all.
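One refinement the pipeline above doesn't include — noted here as a common practice rather than something this project did: tag each image with the commit SHA in addition to latest. The server can then pin to an exact build, and a bad latest can be rolled back by tag:

```yaml
# Hypothetical variant of the Build and Push step — tags every image
# with both the moving 'latest' tag and the immutable commit SHA
- name: Build and Push
  uses: docker/build-push-action@v4
  with:
    push: true
    tags: |
      bayojohn/ecommerce-app:latest
      bayojohn/ecommerce-app:${{ github.sha }}
```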
Step 4 — Docker Hub as the Bridge
Docker Hub is the registry — the bridge between where images are built and
where they run.
After a successful GitHub Actions run the image is tagged and available at
docker.io/bayojohn/ecommerce-app:latest. The EC2 instance doesn't need
access to the source code, the build environment or the developer's laptop.
It just needs to run:
docker pull bayojohn/ecommerce-app:latest
And it gets exactly the image that was built and tested in CI.
Step 5 — Provisioning AWS EC2
For the server I used an AWS EC2 instance — a t2.micro running Ubuntu 22.04.
EC2 was a deliberate choice. Managed services like ECS or App Runner abstract
a lot of complexity away. EC2 forces you to deal with that complexity directly:
security groups, SSH keys, instance profiles, networking. That friction is where
understanding comes from.
After launching the instance:
# Connect via SSH
ssh -i keypair.pem ubuntu@<ec2-public-ip>
# Update packages
sudo apt update && sudo apt upgrade -y
# Install Docker (the convenience script also installs the Compose plugin)
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker ubuntu
# Verify
docker --version
docker compose version
Security groups were configured to allow:
- Port 22 (SSH) — from my IP only
- Port 80 (HTTP) — from anywhere
- Port 3001 (Grafana) — from anywhere
Step 6 — Deploying the Application
With Docker on the server, deployment was clean and repeatable:
# Clone the repository
git clone https://github.com/BayoJohn/Project-2.git
cd Project-2
# Pull the latest images
docker compose pull
# Start all services
docker compose up -d
# Verify everything is running
docker ps
The output of docker ps showed all five containers running:
CONTAINER ID   IMAGE                           STATUS
a1b2c3d4e5f6   bayojohn/ecommerce-app:latest   Up 2 minutes
b2c3d4e5f6a1   postgres:15                     Up 2 minutes
c3d4e5f6a1b2   redis:alpine                    Up 2 minutes
d4e5f6a1b2c3   prom/prometheus                 Up 2 minutes
e5f6a1b2c3d4   grafana/grafana                 Up 2 minutes
All services up. All healthy. Application live.
Step 7 — Observability with Grafana and Prometheus
Deployment without observability is flying blind. You don't know if the
server is struggling, if requests are failing or if you're about to run
out of memory.
I included Prometheus and Grafana in the stack from the start — not as
an afterthought.
Prometheus scrapes metrics from the application and the host machine
every 15 seconds. Grafana turns those metrics into dashboards you can
actually read.
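The prometheus.yml mounted into the Prometheus container isn't shown in the post, but a minimal configuration matching the 15-second scrape interval would look roughly like this. The job names and targets are assumptions — scraping host metrics would also need a node-exporter service, which isn't in the Compose file above:

```yaml
# prometheus.yml — minimal sketch; targets are assumptions
global:
  scrape_interval: 15s        # matches the interval described above

scrape_configs:
  - job_name: app
    static_configs:
      - targets: ['app:3000'] # assumes the app exposes /metrics
  - job_name: host
    static_configs:
      - targets: ['node-exporter:9100']  # requires adding node-exporter
```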
The Grafana dashboard showed:
- CPU usage across all containers
- Memory consumption per service
- Network traffic in and out
- Container uptime
Seeing the system behave in real time — watching CPU spike when traffic
hits, watching memory settle after a restart — that's a different kind of
understanding than reading about it.
What This Project Actually Taught Me
Docker Compose is underrated.
People rush to Kubernetes before they understand Compose. For most projects
Compose is sufficient and far simpler. Master it first.
GitHub Actions is powerful and approachable.
The YAML syntax feels verbose at first but the model is straightforward:
trigger → jobs → steps. Once you understand that, you can automate almost
anything.
EC2 is the best place to learn cloud.
Managed services are great for production but terrible for learning. EC2
forces you to think about networking, security, and the OS. That knowledge
transfers to every other cloud service.
Observability is not optional.
I added Grafana because I wanted to. But after seeing the dashboards I
can't imagine deploying without it. Knowing what your system is doing is
as important as knowing it's running.
Breaking things on a real server teaches more than any tutorial.
I ran into port conflicts, permission errors, security group misconfigurations
and container networking issues. Every one of those failures taught me
something a tutorial never would.
The Full Flow
Here's the complete deployment flow from a single git push to a live application:
Developer pushes to main
↓
GitHub Actions triggers
↓
Docker image built
↓
Image pushed to Docker Hub
↓
EC2 pulls latest image
↓
Docker Compose restarts services
↓
Grafana confirms everything is healthy
↓
Application is live ✅
What's Next
This project used Docker Compose on a single EC2 instance. The natural next
steps are:
- Multiple instances behind a load balancer for high availability
- ECS or Kubernetes for container orchestration at scale
- RDS instead of a containerized database for production data reliability
- Automated EC2 deployment triggered by the GitHub Actions pipeline
so the server pulls and restarts automatically on every push
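As a sketch of that last item — one common pattern (not something this project currently does) is an extra workflow job that SSHes into the instance after the image is pushed, using a community action such as appleboy/ssh-action with the host and key stored as repository secrets:

```yaml
# Hypothetical deploy job — assumes EC2_HOST and EC2_SSH_KEY secrets exist
deploy:
  needs: build
  runs-on: ubuntu-latest
  steps:
    - name: Pull and restart on EC2
      uses: appleboy/ssh-action@v1
      with:
        host: ${{ secrets.EC2_HOST }}
        username: ubuntu
        key: ${{ secrets.EC2_SSH_KEY }}
        script: |
          cd ~/Project-2
          docker compose pull
          docker compose up -d
```

With that in place, every push to main would rebuild, republish and redeploy without a manual SSH session.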
I'll be building each of these out and documenting everything here.
If you're learning DevOps my advice is the same as it's always been:
deploy something real. Deal with the registries, the ports, the servers
and the failures. That's where the abstractions disappear and the real
understanding begins.
Repo: github.com/BayoJohn/Project-2
Found this useful? Drop a comment or connect with me on LinkedIn or Twitter.
Always happy to talk DevOps.
Omobayonle Ogundele
DevOps Engineer based in Lagos, Nigeria. Building reliable infrastructure and sharing logs from the edge of production.