Designing a Modular OMS Architecture
Breaking down the service boundaries for order, risk, matching, core, and market data
When I started building the Order Management System (OMS), I knew I didn’t want a monolith. Trading systems are inherently complex — orders flow through multiple stages, each with its own rules, dependencies, and failure modes. To keep the design clean and extensible, I split the system into five independent services: Order, Risk, Matching, Core, and Market Data.
Why Modularity Matters
In capital markets, modularity isn’t just a nice-to-have — it’s survival. Exchanges evolve, regulations change, and new asset classes appear. By isolating responsibilities, I can:
- Swap or upgrade one service without breaking the others.
- Benchmark and scale bottlenecks independently (e.g., matching engine vs. risk checks).
- Document and explain the architecture clearly to recruiters and peers.
Service Breakdown
📝 Order Service
- Accepts client orders via REST (`/api/orders`).
- Validates payloads (symbol, side, quantity, price, time-in-force).
- Publishes valid orders to the `orders` Kafka topic.
- Acts as the system's “front door”: simple, stateless, and horizontally scalable (see the sketch below).
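To make that concrete, here is a minimal sketch of what the Order Service's entry point could look like. Everything in it is illustrative: the `OrderRequest` fields mirror the `orders` contract, Spring's `KafkaTemplate` stands in for the real producer wiring, and a plain CSV string stands in for the Avro record.

```java
import java.util.UUID;

import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical sketch of the Order Service's REST entry point.
@RestController
@RequestMapping("/api/orders")
public class OrderController {

    private final KafkaTemplate<String, String> kafka;

    public OrderController(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    // Illustrative payload mirroring the `orders` contract; orderId is assigned server-side.
    public record OrderRequest(String symbol, String side, long qty, double price, String tif) {}

    @PostMapping
    public ResponseEntity<String> submit(@RequestBody OrderRequest req) {
        // Validate before anything touches the bus: the front door stays simple and stateless.
        if (req.qty() <= 0 || req.price() <= 0) {
            return ResponseEntity.badRequest().body("qty and price must be positive");
        }
        String orderId = UUID.randomUUID().toString();
        // Key by symbol so every order for one instrument lands on the same partition,
        // preserving the per-symbol ordering that Risk and Matching rely on.
        kafka.send("orders", req.symbol(), String.join(",", orderId, req.symbol(),
                req.side(), Long.toString(req.qty()), Double.toString(req.price()), req.tif()));
        return ResponseEntity.accepted().body(orderId);
    }
}
```

Keying messages by symbol is the one design choice worth calling out: it keeps ordering guarantees per instrument without serializing the whole flow through a single partition.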
🛡️ Risk Service
- Consumes both `orders` and `quotes`.
- Applies pre-trade checks: price bands, max order size, credit exposure.
- Rejects invalid orders immediately and forwards only approved ones.
- Keeps a rolling cache of the latest quotes from the Market Data Service.
- This service enforces compliance and protects the system from bad flow (see the sketch below).
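Here is a framework-free sketch of the pre-trade logic. The 5% band and the max order size are illustrative numbers; in a real deployment they would come from configuration or a limits service.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical pre-trade checks; band width and max size are illustrative.
public class PreTradeRiskCheck {

    private static final double PRICE_BAND_PCT = 0.05; // reject prices > 5% from last
    private static final long MAX_ORDER_QTY = 10_000;

    // Rolling cache of the latest trade price per symbol, fed from the `quotes` topic.
    private final Map<String, Double> lastBySymbol = new ConcurrentHashMap<>();

    public void onQuote(String symbol, double last) {
        lastBySymbol.put(symbol, last);
    }

    /** Returns null if the order passes, otherwise a reject reason. */
    public String check(String symbol, long qty, double price) {
        if (qty <= 0 || qty > MAX_ORDER_QTY) {
            return "qty outside [1, " + MAX_ORDER_QTY + "]";
        }
        Double last = lastBySymbol.get(symbol);
        if (last == null) {
            return "no market data for " + symbol; // fail closed, never open
        }
        if (Math.abs(price - last) / last > PRICE_BAND_PCT) {
            return "price outside " + (int) (PRICE_BAND_PCT * 100) + "% band around " + last;
        }
        return null; // approved: safe to forward to the matching engine
    }
}
```

The important choice here is failing closed: an order for a symbol with no quote yet gets rejected rather than waved through.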
⚖️ Matching Engine
- Maintains an in-memory order book per symbol.
- Matches buy and sell orders using price-time priority.
- Generates trades and publishes them to the `trades` topic.
- Designed for low-latency, deterministic behavior: the “heart” of the OMS (see the sketch below).
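Price-time priority is easy to state and fiddly to implement, so here is a compact single-symbol sketch. The `Order` and `Trade` records are illustrative simplifications of my actual types.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Deque;
import java.util.List;
import java.util.TreeMap;

// Hypothetical single-symbol book. FIFO queues per price level give time priority;
// the sorted maps give price priority. Doubles keep the sketch short, but a real
// engine would key levels by integer price ticks.
public class OrderBook {

    public record Order(String orderId, boolean buy, long qty, double price) {}
    public record Trade(String buyId, String sellId, long qty, double price) {}

    private final TreeMap<Double, Deque<Order>> bids = new TreeMap<>(Comparator.reverseOrder());
    private final TreeMap<Double, Deque<Order>> asks = new TreeMap<>();

    /** Matches an incoming limit order against the opposite side, resting any remainder. */
    public List<Trade> submit(Order incoming) {
        List<Trade> trades = new ArrayList<>();
        TreeMap<Double, Deque<Order>> opposite = incoming.buy() ? asks : bids;
        long remaining = incoming.qty();

        while (remaining > 0 && !opposite.isEmpty()) {
            var best = opposite.firstEntry(); // best price on the other side
            boolean crosses = incoming.buy()
                    ? incoming.price() >= best.getKey()
                    : incoming.price() <= best.getKey();
            if (!crosses) break;

            Order resting = best.getValue().pollFirst(); // oldest order at the level
            long fill = Math.min(remaining, resting.qty());
            trades.add(incoming.buy()
                    ? new Trade(incoming.orderId(), resting.orderId(), fill, best.getKey())
                    : new Trade(resting.orderId(), incoming.orderId(), fill, best.getKey()));
            remaining -= fill;

            if (resting.qty() > fill) { // a partial fill keeps its time priority
                best.getValue().addFirst(new Order(resting.orderId(), resting.buy(),
                        resting.qty() - fill, resting.price()));
            }
            if (best.getValue().isEmpty()) opposite.pollFirstEntry();
        }

        if (remaining > 0) { // rest the unfilled remainder at its limit price
            TreeMap<Double, Deque<Order>> side = incoming.buy() ? bids : asks;
            side.computeIfAbsent(incoming.price(), p -> new ArrayDeque<>())
                .addLast(new Order(incoming.orderId(), incoming.buy(), remaining, incoming.price()));
        }
        return trades;
    }
}
```

Trades always execute at the resting order's price, and partial fills rejoin the front of their level's queue, which is what makes the behavior deterministic and fair.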
🗄️ Core Service
- Persists orders and trades to storage.
- Provides reporting endpoints for downstream systems.
- Simulates integration with clearing/settlement layers.
- Ensures durability and auditability of the trading lifecycle (see the sketch below).
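A minimal sketch of the persistence path, assuming spring-kafka with a JSON deserializer configured for the record payload. An in-memory list stands in for the real database so the example stays self-contained.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical persistence and reporting path for the Core Service.
@RestController
public class TradeStore {

    // Mirrors the `trades` contract; a real service would map this to a database table.
    public record TradeRecord(String tradeId, List<String> orderIds, String symbol,
                              long qty, double price, long timestamp) {}

    // In-memory stand-in for durable storage, kept so the sketch is self-contained.
    private final List<TradeRecord> trades = new CopyOnWriteArrayList<>();

    // Record every fill from the `trades` topic for audit and downstream reporting.
    @KafkaListener(topics = "trades", groupId = "core-service")
    public void onTrade(TradeRecord trade) {
        trades.add(trade);
    }

    // Simple reporting endpoint for downstream systems and debugging.
    @GetMapping("/api/trades")
    public List<TradeRecord> allTrades() {
        return trades;
    }
}
```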
📡 Market Data Service
- Streams synthetic or replayed quotes into the `quotes` topic.
- Provides optional REST snapshots (`/api/quotes`) for debugging.
- Drives the Risk Service and Matching Engine with real-time context.
- Without this, the OMS would be blind to the market (see the sketch below).
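Here is one way the synthetic stream could be generated: a scheduled random walk per symbol, published in the `quotes` contract shape. The symbols, starting prices, and tick rate are all illustrative, and `@EnableScheduling` is assumed on the application class.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical synthetic quote generator; symbols, prices, and rate are illustrative.
@Component
public class QuoteStreamer {

    private final KafkaTemplate<String, String> kafka;
    private final Random random = new Random();
    private final Map<String, Double> lastBySymbol =
            new HashMap<>(Map.of("AAPL", 100.0, "MSFT", 100.0));

    public QuoteStreamer(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    // Every 250 ms, walk each price a small random step and publish a quote
    // in the `quotes` contract shape: symbol, bid, ask, last, timestamp, venue.
    @Scheduled(fixedRate = 250)
    public void tick() {
        lastBySymbol.replaceAll((symbol, last) -> {
            double next = last + (random.nextDouble() - 0.5);
            kafka.send("quotes", symbol, String.join(",", symbol,
                    Double.toString(next - 0.01), Double.toString(next + 0.01),
                    Double.toString(next), Long.toString(System.currentTimeMillis()), "SIM"));
            return next;
        });
    }
}
```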
Kafka Topics and Contracts
- quotes → `{symbol, bid, ask, last, timestamp, venue}`
- orders → `{orderId, symbol, side, qty, price, tif, timestamp}`
- trades → `{tradeId, orderIds, symbol, qty, price, timestamp}`
All messages use Avro schemas with schema evolution enabled. This ensures forward/backward compatibility and makes the system language-agnostic for future Rust or Python modules.
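As a concrete example, here is how the `quotes` contract could be declared with Avro's `SchemaBuilder`. The namespace is made up, and an equivalent .avsc file registered with the Schema Registry works just as well.

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;

// Hypothetical programmatic definition of the `quotes` contract;
// an .avsc file checked into the repo is an equally valid approach.
public class QuoteSchema {

    public static final Schema QUOTE = SchemaBuilder
            .record("Quote").namespace("com.example.oms") // namespace is illustrative
            .fields()
            .requiredString("symbol")
            .requiredDouble("bid")
            .requiredDouble("ask")
            .requiredDouble("last")
            .requiredLong("timestamp") // epoch millis
            // A default on newer fields is what makes backward-compatible
            // evolution work: old records without `venue` still decode.
            .name("venue").type().stringType().stringDefault("SIM")
            .endRecord();

    public static void main(String[] args) {
        System.out.println(QUOTE.toString(true)); // print the canonical JSON schema
    }
}
```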
Design Principles
- Separation of Concerns: Each service has a single, well-defined responsibility.
- Event-Driven Flow: Kafka decouples producers and consumers, enabling async scaling.
- Schema Discipline: Avro + Schema Registry ensures contracts are explicit and versioned.
- Containerization: Docker Compose orchestrates infra and services with shared networks.
- Documentation-First: Architecture diagrams and README entries explain the flow clearly.
Things I Tried
- ✅ Using Spring Boot multi-modules to enforce boundaries at the code level.
- 🔄 Designing Kafka topics with Avro schemas to prevent silent contract drift.
- 🐳 Splitting Docker Compose into infra (Kafka, Schema Registry, Grafana) vs. services.
- 📖 Writing architecture docs as if explaining to a recruiter or teammate.
Reflections
This exercise taught me that architecture is about boundaries, not buzzwords. Once I drew the lines between services, the implementation details (REST vs. Kafka, Avro vs. JSON) became natural extensions of those boundaries. The result is a system that feels both realistic and extensible — something I can keep evolving as I learn more about trading technology.