Dockerizing a Multi-Service OMS

Orchestrating infrastructure and services with Docker Compose

  • Docker
  • DevOps
  • Microservices
  • Backend

Once the OMS services were functional, the next challenge was orchestration. Running five microservices (Order, Risk, Matching, Core, Market Data) plus infrastructure (Kafka, Schema Registry, Grafana, Prometheus) manually was painful. I needed a way to spin up the entire ecosystem with a single command. The solution: Docker Compose.


Why Dockerize?

  • Consistency: Every service runs in the same environment across machines.
  • Reproducibility: Recruiters, teammates, or future me can spin up the system without setup headaches.
  • Isolation: Each service has its own container, avoiding dependency conflicts.
  • Scalability: Containers can be scaled horizontally (docker-compose up --scale order=3).

In short, Docker Compose turned my OMS from a collection of services into a cohesive, reproducible system.


Architecture of the Compose Setup

I split the Compose configuration into two layers:

  1. Infrastructure Layer

    • Kafka + Zookeeper: Messaging backbone.
    • Schema Registry: Avro contract enforcement.
    • Grafana + Prometheus: Monitoring and dashboards.
    • Kafka UI: Optional, for debugging topics.
  2. Service Layer

    • Order Service: REST API for order intake.
    • Risk Service: Consumes orders + quotes, applies checks.
    • Matching Engine: Matches orders, publishes trades.
    • Core Service: Persists orders/trades, exposes reporting.
    • Market Data Service: Streams quotes into Kafka.

Each service has its own Dockerfile, built from a shared base image (Java 21 + Spring Boot runtime).
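As a sketch, a per-service Dockerfile along these lines (the base image tag and jar name are illustrative assumptions, not the exact build):

```dockerfile
# Shared base: Java 21 runtime (eclipse-temurin tag is an assumption)
FROM eclipse-temurin:21-jre

WORKDIR /app

# Copy the Spring Boot fat jar produced by the build (name is illustrative)
COPY target/order-service.jar app.jar

# Default service port; actual ports are set per service via Compose
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "app.jar"]
```

Building every service from the same base image keeps layers shared, so pulling or rebuilding the full stack stays fast.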


Key Compose Features I Used

  • Networks: A shared bridge network (oms-net) so services can discover each other by name.
  • Volumes: Persistent storage for Kafka logs and Grafana dashboards.
  • depends_on: Controls startup order so infra (Kafka, Schema Registry) starts before services. On its own it only orders startup; paired with healthchecks (condition: service_healthy) it also waits for readiness.
  • Profiles: Split into infra and services profiles, so I can run infra alone for debugging.
  • Environment Variables: Centralized configs for Kafka brokers, Schema Registry URLs, and service ports.
  • Healthchecks: Added to Kafka and Schema Registry to ensure readiness before dependent services start.
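Putting those features together, a trimmed docker-compose.yml fragment might look like this (service names, ports, and the healthcheck command are assumptions sketching the pattern, not my exact file):

```yaml
services:
  kafka:
    image: confluentinc/cp-kafka:7.5.0
    profiles: ["infra"]
    networks: [oms-net]
    volumes:
      - kafka-data:/var/lib/kafka/data   # persist logs across restarts
    healthcheck:
      test: ["CMD-SHELL", "kafka-broker-api-versions --bootstrap-server localhost:9092"]
      interval: 10s
      timeout: 5s
      retries: 10

  order-service:
    build: ./order-service
    profiles: ["services"]
    networks: [oms-net]
    ports:
      - "8080:8080"
    environment:
      KAFKA_BOOTSTRAP_SERVERS: kafka:9092          # discovery by service name
      SCHEMA_REGISTRY_URL: http://schema-registry:8081
    depends_on:
      kafka:
        condition: service_healthy   # wait for readiness, not just startup

networks:
  oms-net:
    driver: bridge

volumes:
  kafka-data:
```

The long form of depends_on with condition: service_healthy is what makes the healthchecks pay off: without it, the Order Service would race Kafka at startup.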

Example Workflow

Start everything:

docker-compose up -d

View logs (for Order Service):

docker-compose logs -f order-service

Rebuild a specific service after code changes:

docker-compose up -d --build order-service

Stop all containers:

docker-compose down

Inspect the running setup:

docker ps
docker network inspect oms-net

Debugging and Testing

Once the stack was up, I used Postman to send test orders:

POST http://localhost:8080/api/orders
Content-Type: application/json
{
  "symbol": "AAPL",
  "quantity": 100,
  "price": 182.45,
  "side": "BUY"
}
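The same request can be scripted instead of sent through Postman. A minimal sketch using only the Python standard library (the endpoint is the Order Service API above; the client-side field checks are illustrative assumptions, not the service's validation rules):

```python
import json
import urllib.request

# Order payload matching the Order Service API shown above
order = {"symbol": "AAPL", "quantity": 100, "price": 182.45, "side": "BUY"}

# Client-side sanity checks before sending (illustrative only)
assert order["side"] in ("BUY", "SELL")
assert order["quantity"] > 0 and order["price"] > 0

body = json.dumps(order).encode("utf-8")
request = urllib.request.Request(
    "http://localhost:8080/api/orders",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the stack up, send it: urllib.request.urlopen(request)
```

Useful for smoke-testing in a loop, e.g. firing a batch of orders to watch throughput on the Grafana dashboards.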

Verifying Message Propagation

To confirm end-to-end flow, I watched the order and trade topics in Kafka UI and followed the Core Service logs as orders moved through Risk and Matching. Everything ran smoothly — seeing trades appear in Core Service within milliseconds was immensely satisfying.


Optimizations & Learnings

  • 🗂️ Used a common .env file to centralize environment variables (ports, broker URLs).
  • ⚙️ Enabled JVM heap limits per service using JAVA_OPTS to prevent memory starvation.
  • 🔄 Mounted local volumes for hot‑reloading configs during testing.
  • 🧩 Leveraged Compose profiles (infra, services, monitoring) to selectively bring up components.
  • ♻️ Introduced container restart policies (restart: on-failure) for resilience.
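A few of those optimizations combined in one fragment (variable names and heap values are illustrative, not my exact settings):

```yaml
# .env (shared by all services):
#   KAFKA_BOOTSTRAP_SERVERS=kafka:9092
#   SCHEMA_REGISTRY_URL=http://schema-registry:8081

services:
  order-service:
    env_file: .env                      # pull shared config from one place
    environment:
      JAVA_OPTS: "-Xms256m -Xmx512m"    # cap heap to avoid memory starvation
    restart: on-failure                 # bounce crashed containers automatically
```

Without explicit heap limits, five JVMs each sizing their heap off the host's total memory can easily starve one another on a laptop.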

Final Thoughts

Docker Compose transformed my Order Management System from a cluster of manual scripts into a single‑command distributed environment.
It not only improved my productivity but also made the system reproducible, testable, and portable — crucial for showcasing backend engineering work.