System Design Fundamentals

Decomposing into Services

The Decomposition Decision

You’ve decided: microservices are the answer. Your monolith has sprawled into a tangled mess. User management, order processing, inventory, notifications, payments, and reporting all live together, sharing databases, triggering cascading changes, and forcing deployments of the entire system for a single feature. Teams step on each other’s toes. Scaling is all-or-nothing. The database is a shared kitchen where everyone’s reaching for the same ingredients.

So where do you draw the lines? Where does the order service end and the inventory service begin? Should payments be its own service, or should it live in the order service? What about notifications — that low-priority task that’s used everywhere?

This is the central puzzle of microservices architecture. Split too finely and you’ve built a distributed monolith where every request cascades through ten services, network latency dominates, and debugging requires following traces across a dozen logs. Split too coarsely and you’ve barely improved on your original monolith — you still have teams blocked waiting for each other, and you’ve paid the operational tax of distributed systems without the benefits.

Getting decomposition right is the single most important decision you’ll make. It shapes your entire architecture, your team structure, your deployment strategy, and your operational burden for years to come. This chapter shows you how to think about it.

Decomposition Strategies

There’s no single “correct” way to decompose a system. Different strategies surface different service boundaries, and the best one depends on your organization and business model.

Decompose by Business Capability

Think in terms of what your business does, not how your technology is organized. Amazon doesn’t have a “relational database service” and a “queue service” — they have “orders,” “inventory,” “billing,” “recommendations.” Each business capability becomes a service.

This strategy aligns service boundaries with organizational structure. You can assign a team to own a service end-to-end: they understand the business logic, they own the deployment, they own the data model. Conway’s Law becomes an asset instead of a constraint — your system architecture mirrors your team structure because it should.

Example: An e-commerce platform decomposes into:

  • Order Service: handles order creation, status tracking, order history
  • Inventory Service: manages stock levels, reservations, reorder logic
  • Billing Service: invoicing, payment processing, subscription management
  • Notification Service: sending emails, SMS, push notifications
  • Shipping Service: carrier integration, tracking, delivery scheduling

Each service owns its data and exposes a clear API. A team can understand and modify one service without understanding the entire system.

Decompose by Subdomain (Domain-Driven Design)

Domain-Driven Design gives us a framework. Eric Evans identified three types of subdomains:

  • Core Subdomains: what makes your business unique and valuable. These typically become services. Your competitive advantage lives here.
  • Supporting Subdomains: necessary for the business but not differentiating. Often good candidates for service extraction — you can own them fully without expertise from your differentiating domain.
  • Generic Subdomains: solved problems that commodities exist for. User authentication, email sending, logging — consider buying, not building.

For an e-commerce company, “Recommendation Engine” is core; “Order Management” is supporting; “User Authentication” is generic. This helps you prioritize and makes decomposition decisions more strategic.

Decompose by Use Case or User Journey

Follow the customer journey. What happens when a customer browses products? Orders something? Returns it? Requests a refund? Each major workflow becomes a service boundary.

This strategy makes sense when use cases are distinct and involve different teams. A shipping company might decompose by: order placement, route planning, delivery coordination, customer communication.

The Strangler Fig Pattern: Incremental Decomposition

You’re not starting from a green field. You have a monolith, and it’s in production. Rewriting from scratch is risky and expensive. The Strangler Fig pattern lets you extract services incrementally while keeping the system running.

The idea: gradually replace pieces of the monolith with new microservices. Start with a single service at the edges (not the core), route traffic through it, retire the monolith piece. Rinse and repeat.

Here’s the workflow:

  1. Identify a candidate: Find a module with clear boundaries and loose coupling to the rest. First candidates are often read-heavy, non-critical services (recommendations, analytics, notifications).
  2. Build the new service: Write it from scratch. Use better technology if it makes sense. Own the data.
  3. Install routing: Deploy an API gateway, proxy, or service mesh that routes traffic to the new service for relevant requests. The old monolith handles the rest.
  4. Migrate data: Dual-write if necessary. The monolith writes to both its database and the new service’s database until you’re confident. Eventually, the service becomes the source of truth.
  5. Retire the old code: Once fully migrated, delete the extraction from the monolith.

This is lower-risk than a rewrite because you’re running parallel systems. If the new service fails, you fall back to the monolith. You can test assumptions and learn during the extraction.
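The routing step can be sketched in a few lines. This is a minimal illustration, not a real gateway: a path-prefix table decides whether a request goes to the extracted service or falls through to the monolith (the prefix and backend names are assumptions for the example).

```python
# Minimal strangler-fig routing sketch: requests matching an extracted
# service's path prefix go to the new service; everything else falls
# through to the legacy monolith. Names are illustrative.

ROUTES = {
    "/notifications": "notifications-service",  # extracted so far
}

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend
    return "monolith"  # default: legacy code still handles it
```

As more services are extracted, you add entries to the table; retiring the monolith piece is just a routing change, which is what makes the pattern reversible.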

What’s the Right Service Size?

There’s a spectrum. Too small, and you’ve built a distributed monolith. Too large, and you haven’t solved your original problem.

The Two-Pizza Team Heuristic

Amazon’s rule: a team should be small enough to be fed by two pizzas. Why? A team of 6–10 people can still have synchronous communication, make decisions quickly, and own their service end-to-end.

If your service requires more than one team to understand and change, it’s too big. If multiple teams are constantly coordinating changes, you’ve re-created the monolith’s problem in distributed form.

Service Granularity Spectrum

┌──────────────────────┬──────────────────────────┬─────────────────────┐
│ Too Coarse (Monolith)│ Goldilocks               │ Too Fine (Nano)     │
├──────────────────────┼──────────────────────────┼─────────────────────┤
│ - Hard to scale      │ - Independent deployment │ - Network overhead  │
│ - Team bottlenecks   │ - Clear boundaries       │ - Debugging pain    │
│ - Tangled coupling   │ - Team ownership         │ - Complexity        │
│ - Long deployments   │ - Flexible scaling       │                     │
└──────────────────────┴──────────────────────────┴─────────────────────┘

At the right granularity:

  • One team owns a service and can deploy it independently
  • It scales independently (you scale order processing without scaling inventory)
  • It has a clear API and minimal dependencies on other services
  • It owns its data — no shared databases

Identifying Service Boundaries: A Decision Framework

Look for these signals:

  • Independent deployment needs: Two features need different release cycles. Split them.
  • Different scaling requirements: One module gets pounded; the other is quiet. Split them.
  • Technology mismatch: One piece needs NoSQL, another SQL. One needs Python, another Go. Split them.
  • Different team ownership: One team owns payments; another owns recommendations. Split them.
  • Data change patterns: Data that changes together should live together. If it’s modified separately, split.
  • Regulatory isolation: PII data for GDPR compliance. Financial data for audit. Separate them.
  • Failure domain: If this service goes down, does the whole system fail? Isolate it.

The Decomposition Pitfalls

Anti-Pattern: The Distributed Monolith

You’ve split services, but they can’t move independently. Every deploy affects three other services. Database schemas are synchronized across the boundary. You’ve built a monolith with the operational tax of distributed systems.

This happens when you extract services but keep them tightly coupled through:

  • Shared databases (the worst offender — two services can’t evolve their schemas independently)
  • Synchronous call chains (Service A calls B calls C; latency stacks)
  • Coordinated deployments (services must deploy in a specific order)
  • Shared code libraries (changes ripple)

Fix: Each service owns its data. Use asynchronous communication (events, queues). Deploy services independently.
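The asynchronous fix can be illustrated with a toy event bus. This in-memory bus is a stand-in for a real broker (Kafka, RabbitMQ, and similar); the event name and payload fields are made up for the example.

```python
# Illustrative in-memory event bus: a stand-in for a real broker.
# The order service publishes an event instead of calling the
# inventory service synchronously, so no call chain forms.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Deliver to every subscriber; the publisher doesn't know who they are.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
reserved = []

# Inventory service subscribes and reacts when an order is placed.
bus.subscribe("OrderPlaced", lambda event: reserved.append(event["sku"]))

# Order service publishes and moves on -- no synchronous dependency.
bus.publish("OrderPlaced", {"order_id": 1, "sku": "WIDGET-9"})
```

The key property: the order service has no compile-time or deploy-time dependency on inventory. Either side can be deployed independently as long as the event contract holds.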

Anti-Pattern: Nano-Services

You’ve over-decomposed. A service for user preferences. A service for user settings. A service to aggregate them. Network overhead dominates. Every request bounces between three services. Latency is terrible. Debugging is a nightmare.

This typically happens when you decompose by entity (one service per database table) rather than by capability.

Fix: Coarsen boundaries. Accept that one service might do multiple things if they’re related, owned by the same team, and deployed together.

Anti-Pattern: The Data Trap

You’ve decomposed by database table. One service owns the users table, another owns orders. But orders need user data. So the service makes a synchronous call. Then the user service needs order history, so it calls back. You’ve created circular dependencies and tight coupling.

Fix: Denormalize. The order service can have a denormalized copy of user information (name, email, address). It’s not the source of truth, but it’s sufficient for orders. Update it asynchronously when the user service changes user data.
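A minimal sketch of that asynchronous refresh, with illustrative field and event names: the order service keeps a local snapshot of user data and updates it when the user service announces a change.

```python
# Sketch: the order service owns a denormalized, read-only copy of
# user data, refreshed when the user service publishes a UserUpdated
# event. Field and event names are illustrative.

# Local snapshot owned by the order service (not the source of truth).
user_snapshots = {}

def on_user_updated(event):
    """Event handler: refresh the local snapshot for this user."""
    user_snapshots[event["user_id"]] = {
        "name": event["name"],
        "email": event["email"],
    }

def create_order(user_id, sku):
    """Build an order from the local snapshot -- no call to the user service."""
    snapshot = user_snapshots.get(user_id, {})
    return {
        "user_id": user_id,
        "sku": sku,
        "ship_to": snapshot.get("name", "unknown"),
    }

on_user_updated({"user_id": 7, "name": "Ada", "email": "ada@example.com"})
order = create_order(7, "WIDGET-9")
```

The snapshot may be briefly stale, which is usually acceptable for a shipping label; what you gain is that creating an order never blocks on the user service.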

Dependency Tangles

Before you split, map the dependencies. Which modules call which? A dependency matrix or call graph is revealing. If you extract a service that depends on six other modules, you haven’t won anything.

Pro tip: Tools like Lattix or NDepend can generate dependency matrices automatically. Look for modules that are relatively isolated — they’re good extraction candidates.

Recognizing Data Ownership

This is critical. Decide which service owns which data. Only that service writes to that data. Other services can read, but they can’t modify.

Example:

  • Order Service owns the orders table. Only it writes.
  • Shipping Service reads the orders table. When it updates tracking information, it updates a separate shipment_tracking table that it owns.
  • Billing Service reads the orders table for invoice generation. But it owns the invoices table.

If two services write to the same table, you have a race condition and shared ownership — a consistency nightmare.

This is why shared databases are toxic. A shared database doesn’t enforce data ownership; it’s a free-for-all.

Decomposing an E-Commerce Monolith: A Walkthrough

Let’s decompose a real system. You have a monolith:

User Management → Order Processing → Inventory → Payments → Notifications → Reporting
                 (everything calls everything)

Step 1: Dependency Analysis

Map what depends on what:

  • Orders depends on Users, Inventory, Payments — needs customer info, stock check, card charge
  • Inventory depends on nothing — read-only for orders
  • Payments depends on Orders — creates an invoice after payment
  • Notifications depends on Orders, Users, Payments — sends emails after key events
  • Reporting depends on all others — queries for analytics
  • User Management depends on Payments — for customer credit info (weak dependency)

Inventory has zero dependencies. It’s a great first candidate to extract. Notifications is next — it’s heavily depended on, but it doesn’t depend on much.

Step 2: Identify the First Extraction Candidate

Notifications Service is an ideal candidate because:

  • It’s used by many services but doesn’t use them directly (they call it via events)
  • It’s non-critical (if it fails, orders still process; notifications are retried)
  • It has clear boundaries (send email, SMS, push)
  • A single team can own it

Step 3: Apply the Strangler Fig Pattern

graph LR
    A["Monolith"] -->|Old Code| B["Email/SMS/Push"]
    A -->|API Gateway Routes| C["Notifications Service"]
    C -->|Sends| B
    A -->|Publishes Events| D["Event Stream"]
    C -->|Subscribes| D

The monolith publishes events (user signed up, order placed, payment received). The new Notifications Service subscribes. API calls to send notifications are routed to the new service. The monolith still has notification code, but it’s dormant.

Once confident, delete the notification code from the monolith.
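Because events can be delivered more than once (the monolith retries, the stream redelivers), the new service’s consumer should be idempotent. A sketch, with an illustrative event shape and an in-memory set standing in for a persistent dedupe store:

```python
# Sketch: the Notifications Service consumes events idempotently, so a
# retried or redelivered event never sends a duplicate email.
# Event shape and the dedupe mechanism are illustrative.

seen = set()   # event ids already handled (would be persistent in reality)
outbox = []    # messages "sent" -- stand-in for an email/SMS provider

def handle_event(event):
    """Process an event exactly once, even if it's delivered twice."""
    if event["id"] in seen:
        return  # duplicate delivery: ignore
    seen.add(event["id"])
    outbox.append(f"order {event['order_id']} placed")

handle_event({"id": "evt-1", "type": "OrderPlaced", "order_id": 42})
handle_event({"id": "evt-1", "type": "OrderPlaced", "order_id": 42})  # retry
```

One notification goes out despite two deliveries. This property is what makes notifications safe to retry, and safe to extract first.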

Step 4: Extract Payments

graph LR
    A["Monolith<br/>(Orders, Users, Inventory)"] -->|sync| B["Payments Service"]
    B -->|Publishes: PaymentProcessed| D["Event Stream"]
    A -->|Subscribes: PaymentProcessed| D

The Payments Service is extracted. Orders still call it synchronously (payment is critical), but it publishes events that the monolith consumes for updates.
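The hybrid flow can be sketched as follows. This is an illustration under stated assumptions: the function names are made up, and an in-memory list stands in for the event stream. The point is the shape — block on the charge, defer everything else to events.

```python
# Sketch of step 4: the monolith calls Payments synchronously (payment
# is on the critical path), and Payments publishes an event for the
# monolith to consume asynchronously. All names are illustrative.

events = []  # stand-in for the event stream

def payments_charge(order_id, amount):
    """Payments Service: synchronous charge, then publish an event."""
    # ...card network call would happen here...
    events.append({"type": "PaymentProcessed", "order_id": order_id})
    return {"status": "charged", "amount": amount}

def place_order(order_id, amount):
    """Monolith: block on the charge, handle follow-up work via events."""
    result = payments_charge(order_id, amount)  # sync: must succeed to proceed
    return result["status"]

status = place_order(42, 19.99)
```

The design choice: only the step that genuinely must succeed before continuing stays synchronous; updates like marking the order paid flow back through the event stream.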

Step 5: Continue

Extract Inventory, then User Management. Order Processing stays in the monolith for now — it’s the core and it’s complex. Once it’s isolated, you can extract it last.

Trade-Offs in Decomposition

Speed vs. Safety

Big Bang Rewrite: Exciting, but risky. The new code doesn’t run in production until the end, so design assumptions go untested against reality. You’re months away from learning whether it works.

Incremental Extraction (Strangler Fig): Slower, but safer. Each extraction is small, testable, and reversible. You learn as you go.

Recommendation: Incremental. You’ll make better decisions with real data.

Network Overhead

Decomposition adds network hops. Service A calls Service B calls Service C. Latency is higher than function calls in a monolith. Use asynchronous communication (events, queues) where possible.

Testing Complexity

Testing a microservice system is harder. You need:

  • Unit tests for each service
  • Integration tests (service A with service B)
  • End-to-end tests (entire flow)
  • Contract tests (ensure service A’s output matches service B’s expectations)

But this is manageable with the right test pyramid.
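Contract tests are the least familiar item on that list, so here is a minimal consumer-driven check. The field names are hypothetical: the consumer (say, Billing) pins the fields it actually reads from the provider’s (Order Service’s) response, and the test fails if the provider drops one.

```python
# Sketch of a consumer-driven contract check: the consumer declares the
# fields it reads; the provider's sample response must contain them all.
# Field names are hypothetical, not a real API.

REQUIRED_ORDER_FIELDS = {"order_id", "total", "currency"}

def check_order_contract(response: dict) -> bool:
    """Pass if the response contains every field the consumer depends on."""
    return REQUIRED_ORDER_FIELDS <= response.keys()

# A sample response, as captured from the provider's own test suite.
sample = {"order_id": 42, "total": 19.99, "currency": "USD", "status": "paid"}
ok = check_order_contract(sample)
```

Extra fields (like status above) don’t break the contract; missing ones do. Running this check in the provider’s pipeline catches breaking changes before they reach the consumer.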

When to Stop Decomposing

Stop when:

  • Services are owned by one team
  • Deployment is independent
  • You can understand a service without understanding the whole system
  • Network latency isn’t a bottleneck

You probably end up with 5–15 services. Uber has hundreds because of its scale, but most systems don’t need that.

Real-World Lessons

Did you know? Netflix uses a “Strangler Fig” approach so consistently that it’s now a pattern in their playbook. They extract a service, run it in parallel, verify it works, then remove the old code. Boring but effective.

Pro tip: The first service you extract will take 3x longer than you estimate. The second will take 2x. By the fifth, you’ll have a process. Don’t lose momentum.

Common mistake: Decomposing too early. If your monolith is still new and your teams are aligned, stay monolithic. Premature decomposition costs in operational complexity. Wait until you feel the pain.

Key Takeaways

  • Decompose by business capability or subdomain, not by technical layers or database tables. Align service boundaries with your organizational structure.
  • The Strangler Fig pattern is your friend. Extract services incrementally from a running monolith. It’s safer and you learn as you go.
  • A service should be understood and owned by one team. If you need multiple teams, it’s too big. Use the two-pizza heuristic.
  • Each service owns its data. Shared databases are the antithesis of microservices. If two services need the same data, one denormalizes a read-only copy.
  • Map dependencies before decomposing. Start with modules that have low coupling and clear boundaries. Inventory and notifications are typical first candidates.
  • Watch for the distributed monolith. If services must deploy together, share databases, or have circular dependencies, you’ve not actually decomposed — you’ve just distributed your problems.

Practice Scenarios

Scenario 1: SaaS Analytics Platform

You have a monolith handling: user management, data ingestion, query execution, visualization rendering, alerting, and billing. You want to decompose.

  • Which service would you extract first, and why?
  • How would you prevent the visualization service from becoming a bottleneck if millions of dashboards are rendered per day?
  • Where would you keep shared data (user profiles), and how?

Scenario 2: Ride-Sharing Service

Current monolith: driver matching, payment processing, rating/reviews, notifications, ride history, surge pricing.

  • Map the dependencies. Which module depends on which?
  • Design a decomposition plan. What’s the first service to extract?
  • Driver matching needs real-time data on driver locations. How would you handle this across service boundaries?

Scenario 3: Multi-Tenant Marketplace

You’re building a marketplace (sellers, buyers, listings, transactions, messaging). Teams are: Seller Platform (2 people), Buyer Experience (3 people), Payments (2 people), Operations (2 people).

  • How would you decompose to align with team ownership?
  • What data does the Buyer Experience team own? What do they need to read from Seller Platform?
  • Where would you draw the line between Payments and Transaction Management?

What’s Next

Decomposing into services is the architecture. But how do you define clear boundaries between them? How do you ensure one service’s data model doesn’t leak into another’s? How do you handle distributed transactions and consistency across service boundaries?

That’s where Domain-Driven Design becomes your toolkit. In the next chapter, we’ll explore service boundaries and DDD in depth — the language and patterns for drawing lines that stick.