Order Management System (OMS)

Event-driven trading OMS with modular microservices and real-time market data

  • Java 21
  • Spring Boot
  • Kafka
  • Avro
  • Docker
  • JMeter
  • Grafana

A long-running flagship project: a modular, event-driven Order Management System designed for capital markets. It simulates the full trade lifecycle — from order intake and risk checks to matching, trade generation, and reporting — while integrating real-time market data streams.

Goals and Motivation

The motivation behind this project was to deepen my expertise in backend architecture for trading systems and to build a realistic, production-grade platform that demonstrates:

  • Event-driven design principles
  • Modular service boundaries
  • Real-time data integration
  • Extensibility for future enhancements (e.g., Rust modules, Kubernetes scaling)

It also serves as a portfolio anchor, showcasing my ability to design, implement, and document complex distributed systems with recruiter-ready clarity.

System Architecture

The OMS is composed of five core services, each containerized and orchestrated with Docker Compose:

  • Order Service
    Handles order intake via REST (/api/orders), validates payloads, and publishes to the orders Kafka topic.

  • Risk Service
    Consumes both orders and quotes topics. Performs pre-trade risk checks (e.g., price bands, quantity limits) using the latest market data before forwarding valid orders.

  • Matching Engine
    Maintains an in-memory order book, consumes validated orders, and generates trades by matching buy/sell orders. Publishes trades to the trades topic.

  • Core Service
    Persists orders and trades to storage, provides reporting endpoints, and simulates downstream integration with clearing/settlement systems.

  • Market Data Service
    Streams synthetic or replayed quotes into the quotes topic. Provides optional REST snapshots (/api/quotes) for debugging and validation.
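
The Risk Service's pre-trade checks can be sketched as a small quote-driven validator. This is a minimal illustration, not the project's actual code: the class and names (`RiskChecker`, `Quote`, `MAX_QTY`, `BAND`) and the 5% band are assumptions chosen for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of a pre-trade risk check driven by the latest quote.
// Class, record, and constant names are illustrative, not the project's API.
public class RiskChecker {
    public record Quote(String symbol, double bid, double ask, double last) {}

    private static final long MAX_QTY = 10_000;  // per-order quantity limit (example value)
    private static final double BAND = 0.05;     // 5% price band around last trade (example value)

    private final Map<String, Quote> latest = new ConcurrentHashMap<>();

    // Called by the quotes-topic consumer to cache the most recent quote per symbol.
    public void onQuote(Quote q) { latest.put(q.symbol(), q); }

    // Returns true only if the order passes both quantity and price-band checks.
    public boolean accept(String symbol, long qty, double price) {
        if (qty <= 0 || qty > MAX_QTY) return false;
        Quote q = latest.get(symbol);
        if (q == null) return false;             // no market data yet: reject conservatively
        double lo = q.last() * (1 - BAND);
        double hi = q.last() * (1 + BAND);
        return price >= lo && price <= hi;
    }
}
```

Rejecting when no quote is cached is a deliberately conservative choice; a real system might instead queue the order or fall back to static limits.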

Kafka Topics and Schemas

  • quotes → symbol, bid/ask, last price, timestamp
  • orders → orderId, symbol, side, quantity, price, tif, timestamp
  • trades → tradeId, orderIds, symbol, quantity, price, timestamp

All messages use Avro schemas with schema evolution support for forward/backward compatibility.
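
As one illustration, the `orders` schema could look like the following Avro record. The field names come from the list above; the enum, namespace, and the `tif` default are assumptions — a field added later with a default like this is exactly what keeps the schema backward compatible for older consumers.

```json
{
  "type": "record",
  "name": "Order",
  "namespace": "oms.orders",
  "fields": [
    {"name": "orderId",   "type": "string"},
    {"name": "symbol",    "type": "string"},
    {"name": "side",      "type": {"type": "enum", "name": "Side", "symbols": ["BUY", "SELL"]}},
    {"name": "quantity",  "type": "long"},
    {"name": "price",     "type": "double"},
    {"name": "tif",       "type": "string", "default": "DAY"},
    {"name": "timestamp", "type": {"type": "long", "logicalType": "timestamp-millis"}}
  ]
}
```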

Key Features

  • 🧩 Microservice modularity: Clear separation of concerns across order, risk, matching, core, and market data services.
  • 🔄 Event-driven architecture: Kafka as the backbone for asynchronous, decoupled communication.
  • 🐳 Containerized deployment: Docker Compose orchestrates infra (Kafka, Schema Registry, Grafana) and services with shared networks and startup sequencing.
  • 📡 Market data integration: Real-time quotes drive risk checks and matching logic, simulating real-world trading conditions.
  • 🛡️ Risk validation: Pre-trade checks ensure only valid orders reach the matching engine.
  • ⚖️ Matching engine: Implements price-time priority matching with trade generation.
  • 🗄️ Persistence and reporting: Core service stores trades and provides reporting APIs.
  • 📖 Documentation-first approach: Architecture diagrams, setup instructions, and benchmarking workflows included.
  • 🛠️ Clean design principles: DTO layering, MapStruct mapping, and separation of concerns for maintainability.
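
The price-time priority rule from the matching engine can be sketched with two priority queues: orders are ranked by best price first, then by arrival sequence, and fills execute at the resting order's price. This is a simplified, single-threaded sketch under those assumptions — the names (`Book`, `submit`, `Trade`) are illustrative, not the project's API.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Minimal price-time priority matching sketch; not the project's engine.
public class Book {
    public record Order(long id, boolean buy, long qty, double price, long seq) {}
    public record Trade(long buyId, long sellId, long qty, double price) {}

    // Best bid first: highest price, then earliest arrival (seq).
    private final PriorityQueue<Order> bids = new PriorityQueue<>(
        Comparator.comparingDouble((Order o) -> -o.price()).thenComparingLong(Order::seq));
    // Best ask first: lowest price, then earliest arrival.
    private final PriorityQueue<Order> asks = new PriorityQueue<>(
        Comparator.comparingDouble(Order::price).thenComparingLong(Order::seq));

    private long seq = 0;

    // Matches an incoming order against the opposite side; leftover quantity rests.
    public List<Trade> submit(long id, boolean buy, long qty, double price) {
        Order incoming = new Order(id, buy, qty, price, seq++);
        List<Trade> result = new ArrayList<>();
        PriorityQueue<Order> opposite = buy ? asks : bids;
        while (incoming.qty() > 0 && !opposite.isEmpty()) {
            Order best = opposite.peek();
            boolean crosses = buy ? incoming.price() >= best.price()
                                  : incoming.price() <= best.price();
            if (!crosses) break;
            long fill = Math.min(incoming.qty(), best.qty());
            // Trades execute at the resting order's price (standard convention).
            result.add(buy ? new Trade(incoming.id(), best.id(), fill, best.price())
                           : new Trade(best.id(), incoming.id(), fill, best.price()));
            opposite.poll();
            if (best.qty() > fill)
                opposite.add(new Order(best.id(), best.buy(), best.qty() - fill,
                                       best.price(), best.seq()));
            incoming = new Order(incoming.id(), buy, incoming.qty() - fill,
                                 price, incoming.seq());
        }
        if (incoming.qty() > 0) (buy ? bids : asks).add(incoming);
        return result;
    }
}
```

A production engine would also handle time-in-force, cancels, and self-match prevention; the sketch only shows the priority and fill logic.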

Development and Setup

  • Languages & Frameworks: Java 21, Spring Boot, Maven multi-module project.
  • Messaging: Kafka with Avro schemas and Schema Registry.
  • Infrastructure: Docker Compose for infra + services, with environment-specific application-docker.properties.
  • Monitoring: Grafana dashboards for JVM metrics, Kafka lag, and throughput.
  • Testing & Benchmarking: JMeter for load generation; synthetic quote generator for market data.
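
The startup sequencing mentioned above is typically expressed with health checks and `depends_on` conditions. A hedged excerpt of what such a Compose file might contain — image tags, service names, and the health-check command are assumptions, not the project's actual configuration:

```yaml
services:
  kafka:
    image: confluentinc/cp-kafka:7.6.0
    healthcheck:
      test: ["CMD", "kafka-topics", "--bootstrap-server", "localhost:9092", "--list"]
      interval: 10s
      retries: 10
  schema-registry:
    image: confluentinc/cp-schema-registry:7.6.0
    depends_on:
      kafka:
        condition: service_healthy
  order-service:
    build: ./order-service
    environment:
      SPRING_PROFILES_ACTIVE: docker   # loads application-docker.properties
    depends_on:
      kafka:
        condition: service_healthy
      schema-registry:
        condition: service_started
```

Gating services on `service_healthy` rather than plain `depends_on` avoids the classic failure mode where consumers start before the broker is ready to accept connections.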

Lessons Learned

  • Event-driven design: Learned to balance modularity with reliability in distributed systems.
  • Schema evolution: Gained practical experience with Avro and the importance of compatibility in cross-service communication.
  • Real-time data pipelines: Understood how market data feeds drive risk and matching logic in trading systems.
  • Infrastructure orchestration: Containerizing and sequencing multi-service environments taught me to build resilient setups and debug them across operating systems.
  • Documentation as a skill: Writing recruiter-ready docs and diagrams sharpened my ability to present complex systems clearly.