Container Orchestration Platforms
Containers fundamentally changed how we package applications—you build it once, and it runs consistently everywhere. But packaging is only half the battle. The real challenge emerges when you’re running dozens, hundreds, or thousands of containers across multiple machines in production. That’s where orchestration platforms come in. They handle scheduling, resource allocation, networking, storage, updates, and recovery. This reference covers the key platforms and helps you choose the right one for your team and workload.
Docker: The Foundation
Docker isn’t an orchestrator itself, but no discussion of container orchestration is complete without it. Docker is the container runtime and tooling that everything else depends on.
Core components:
- Docker Engine: The container runtime that executes containers
- Dockerfile: Defines how to build an image layer by layer
- Docker Compose: For running multiple containers locally with networking and volumes
- Docker Hub: Public registry for sharing images; also supports private registries
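A minimal Dockerfile illustrates the layer-by-layer build model (the base image, file names, and start command here are illustrative, not from a real project):

```dockerfile
# Each instruction creates a cached layer; order from least to most frequently changed.
FROM node:20-alpine          # base image layer
WORKDIR /app
COPY package*.json ./        # copy dependency manifests first so installs cache well
RUN npm ci --omit=dev        # dependency layer, rebuilt only when manifests change
COPY . .                     # application source (changes most often, so it goes last)
EXPOSE 8080
CMD ["node", "server.js"]
```

Ordering instructions this way means a source-only change reuses the cached dependency layer instead of reinstalling everything.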
When you need Docker: You always need Docker (or an equivalent OCI-compliant runtime like containerd or CRI-O). The question is what orchestrator runs on top of it.
Pro tip: Use Docker Compose for local development and integration testing—it’s fast to iterate and mirrors your production container setup without the orchestration overhead.
Kubernetes: The Industry Standard
Kubernetes (K8s) is the de facto standard for container orchestration in production environments. It’s powerful, feature-rich, and has become the platform of choice for organizations running containers at any real scale.
Key concepts you need to know:
- Pods: The smallest deployable unit. One or more containers (usually one) sharing a network namespace. All containers in a pod communicate over localhost.
- Deployments: Declarative configuration for running pods. Supports rolling updates, rollbacks, and replica management.
- Services: Provide stable networking and DNS for pods. The most common types:
- ClusterIP: Internal service, accessible only within the cluster
- NodePort: Exposes a port on every node
- LoadBalancer: Integrates with cloud provider load balancers
- Ingress: HTTP/HTTPS routing layer. Route traffic to services based on hostname and path.
- ConfigMaps and Secrets: Store configuration and sensitive data separately from pod definitions.
- Namespaces: Logical cluster partitioning. Useful for multi-tenant setups or environment isolation.
- PersistentVolumes (PV) and PersistentVolumeClaims (PVC): Manage storage independent of pod lifecycle.
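As a sketch of the Ingress concept above, a minimal manifest routes traffic by hostname and path to a Service (the names are illustrative, and an ingress controller such as ingress-nginx must be installed in the cluster for the rules to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: app.example.com          # match by hostname
      http:
        paths:
          - path: /api               # and by path prefix
            pathType: Prefix
            backend:
              service:
                name: app-service    # forward to this Service
                port:
                  number: 80
```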
Scaling and reliability:
- HPA (Horizontal Pod Autoscaler): Automatically scale the number of pod replicas based on CPU or custom metrics
- VPA (Vertical Pod Autoscaler): Automatically adjust CPU and memory requests/limits for pods
- Cluster Autoscaler: Add or remove nodes from the cluster based on resource demands
- Health checks: Define liveness probes (is the pod alive?), readiness probes (is it ready for traffic?), and startup probes (has initialization completed?)
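A minimal HPA manifest, targeting a hypothetical Deployment named `app`, gives a feel for how the scaling rules above are declared (autoscaling/v2 API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                      # the Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU
```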
Kubernetes ecosystem:
- Helm: Package manager for Kubernetes. Charts bundle manifests, templates, and defaults for common applications (databases, monitoring, ingress controllers).
- Service Mesh (Istio, Linkerd): Adds sophisticated traffic management, security policies, and observability. Overkill for simple deployments, essential for complex multi-service systems.
- Operators: Extend Kubernetes with custom controllers for stateful applications (databases, brokers). Examples: Prometheus Operator, MySQL Operator.
When is Kubernetes overkill?
Kubernetes has a steep learning curve and operational overhead. Consider alternatives if:
- Your team is fewer than 5 engineers
- You’re running only 1-3 services
- You don’t need multi-region or high-availability failover
- You have minimal DevOps expertise
Pro tip: Start with managed Kubernetes (EKS, GKE, AKS) rather than self-hosted. The control plane operational burden is substantial, and managed services integrate better with cloud infrastructure.
Managed Kubernetes Services
All major cloud providers offer fully managed Kubernetes:
- EKS (AWS): Integrates with EC2, IAM, VPC, and Elastic Load Balancing. The widest surrounding service ecosystem, though the integration experience can feel clunky.
- GKE (Google Cloud): Widely considered the most mature managed Kubernetes offering. Excellent auto-scaling, automatic upgrades, and deep integration with Google’s observability tools. Smaller ecosystem compared to AWS.
- AKS (Azure): Strong integration with Azure services and enterprise tooling (Active Directory, Azure DevOps). Good for organizations already invested in Microsoft stack.
Managed Kubernetes pros:
- You don’t manage the control plane nodes
- Automatic security patches and upgrades
- Integrated with cloud IAM, networking, and storage
- Built-in monitoring and logging
Managed Kubernetes cons:
- Vendor lock-in (though Kubernetes portability is better than most)
- Per-node and control plane costs
- Less control over cluster configuration
Amazon ECS and Fargate
AWS offers its own container orchestration platform, ECS (its general availability predates Kubernetes 1.0), with two launch types:
ECS with EC2 launch type:
- You manage EC2 instances; ECS schedules containers on them
- You handle patching, security groups, monitoring
- More control but more operational responsibility
Fargate (serverless):
- AWS manages the underlying compute—no instances to manage
- You define CPU/memory, and Fargate handles the rest
- Simpler but more expensive per container than EC2
- No SSH access to infrastructure (good for security, bad for debugging)
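For comparison with the Kubernetes manifests later in this section, a trimmed Fargate task definition might look like this (names and sizes are illustrative; a real definition also needs an execution role ARN):

```json
{
  "family": "myapp",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "myapp:latest",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

You declare CPU and memory at the task level and Fargate provisions matching compute; there is no instance type to choose.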
When to use ECS/Fargate:
- Team is already deep in AWS ecosystem
- Need simple setup without Kubernetes complexity
- Running fewer than 20 services (Kubernetes starts to shine at larger scales)
- AWS-only deployment is acceptable (no multi-cloud requirement)
Docker Swarm
Docker Swarm is built into Docker and provides basic orchestration. It’s simple to set up (docker swarm init, docker service create) but lacks Kubernetes features like autoscaling, sophisticated networking, and mature stateful workload support. It has been declining in popularity since Kubernetes won the orchestration wars. Use it only if you have a specific reason to stay within Docker’s ecosystem and genuinely minimal scale.
Nomad (HashiCorp)
HashiCorp’s Nomad is unique: it orchestrates not just containers but any workload—Docker containers, VMs, Java apps, and batch jobs. This flexibility appeals to organizations with diverse workload types.
Key characteristics:
- Simpler architecture than Kubernetes (a single binary, no separate etcd or control plane components, simpler networking)
- Integrates with Consul (service discovery and networking) and Vault (secrets management)
- Good for mixed infrastructure environments
- Smaller community than Kubernetes; fewer third-party integrations
When to use Nomad:
- You need to orchestrate heterogeneous workloads (containers plus VMs plus batch jobs)
- You’re already using Consul and Vault
- You want simpler orchestration but broader workload support
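A minimal Nomad job file for the same kind of hypothetical web app gives a feel for the HCL jobspec (the datacenter name, port label, and resource numbers are illustrative):

```hcl
job "myapp" {
  datacenters = ["dc1"]
  type        = "service"

  group "web" {
    count = 3                 # roughly equivalent to replicas: 3 in Kubernetes

    network {
      port "http" {
        to = 8080             # container port exposed under the "http" label
      }
    }

    task "app" {
      driver = "docker"       # other drivers cover exec, java, qemu (VMs), etc.

      config {
        image = "myapp:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 500          # MHz
        memory = 256          # MB
      }
    }
  }
}
```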
Comparison Matrix
| Platform | Complexity | Scaling | Multi-Cloud | Managed Option | Best For |
|---|---|---|---|---|---|
| Docker | Low | Single machine | No | No | Local dev, single-host deployments |
| Kubernetes | High | Excellent | Excellent | Yes (EKS, GKE, AKS) | Production, complex systems, multi-region |
| ECS (EC2) | Medium | Good | AWS only | No | AWS-native teams, medium scale |
| Fargate | Medium | Good | AWS only | Yes (AWS managed) | AWS, serverless preference, simple workloads |
| Docker Swarm | Low | Limited | No | No | Very small deployments only |
| Nomad | Medium | Good | Excellent | No (but cloud-agnostic) | Mixed workloads, heterogeneous infrastructure |
Decision Framework
Choose Kubernetes if:
- Running 10 or more microservices
- Need multi-region or disaster recovery
- Team has Kubernetes expertise or time to learn
- Production scale with high availability requirements
Choose managed Kubernetes if:
- You meet the above criteria but want to minimize operational burden
- You’re already on AWS, Google Cloud, or Azure
- You don’t want to manage the control plane
Choose ECS/Fargate if:
- Team is AWS-focused
- Running fewer than 20 services
- Want simplicity over flexibility
- Vendor lock-in is acceptable
Choose Nomad if:
- You have mixed workload types (containers, VMs, batch jobs)
- Want simpler orchestration with cross-cloud flexibility
- Already committed to HashiCorp tooling
Choose Docker Swarm only if:
- You have a genuinely small, simple deployment
- You need to stay within Docker’s ecosystem for compliance reasons
Getting Started
Kubernetes quickstart:
```shell
# Local development
minikube start                      # or: kind create cluster
kubectl apply -f deployment.yaml
kubectl get pods
kubectl logs <pod-name>

# Cloud managed (each needs additional flags in practice,
# e.g. --role-arn and VPC config for EKS)
aws eks create-cluster --name my-cluster
gcloud container clusters create my-cluster
az aks create --resource-group mygroup --name my-cluster
```
Docker Compose for comparison (local):
```yaml
version: '3.8'
services:
  app:
    image: myapp:latest
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://db:5432
    depends_on:
      - db
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: secret
```
Kubernetes equivalent (Deployment + Service):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myapp:latest
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              value: "postgres://db-service:5432"
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
```
Key Takeaways
- Docker is your container runtime—always present, never the orchestrator itself
- Kubernetes is the standard for production systems at meaningful scale; start with a managed service
- ECS/Fargate are solid if you’re AWS-locked and want simplicity
- Nomad is your choice if you need to orchestrate diverse workload types
- Pick based on scale, team expertise, and cloud strategy, not hype
The right choice depends on your current reality, not aspirational architecture. A small team with five services on AWS shouldn’t feel pressure to run Kubernetes. But if you’re building a platform serving multiple teams, Kubernetes becomes a worthwhile investment.
See Appendix B for deeper resources on each platform.