Why Orchestration?
Running a few containers manually is manageable. Running dozens or hundreds across multiple servers requires automated orchestration — scheduling, scaling, networking, and self-healing.
Key Concepts
- Scheduling — deciding which node runs which container
- Scaling — adding/removing container instances based on demand
- Service discovery — containers finding each other by name
- Load balancing — distributing traffic across container instances
- Self-healing — automatically restarting failed containers
- Rolling updates — deploying new versions without downtime
Docker Swarm (Simple)
# Initialize swarm
docker swarm init
# Create a service with 3 replicas
docker service create --name web --replicas 3 -p 80:80 nginx
# Scale up
docker service scale web=5
# Rolling update
docker service update --image nginx:latest web
Kubernetes (Enterprise)
# Deploy with kubectl
kubectl create deployment web --image=nginx --replicas=3
kubectl expose deployment web --port=80 --type=LoadBalancer
kubectl scale deployment web --replicas=5
When to Use What
- Docker Compose — single server, development, small production
- Docker Swarm — simple multi-node, built into Docker
- Kubernetes — complex multi-node, enterprise features, steep learning curve
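The Kubernetes listing above deploys and scales but stops short of a rolling update. A hedged sketch of the equivalents of Swarm's `service update` (assumes the `web` deployment from that section and a reachable cluster; the `nginx:1.27` tag is illustrative — `kubectl create deployment --image=nginx` names the container `nginx`):

```shell
# Rolling update: replace the image; Kubernetes swaps pods incrementally.
kubectl set image deployment/web nginx=nginx:1.27
# Watch the rollout until all replicas are on the new version.
kubectl rollout status deployment/web
# Roll back to the previous revision if the update misbehaves.
kubectl rollout undo deployment/web
```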