
How to Perform Rolling Updates in Kubernetes

By Admin · Mar 2, 2026 · Updated Apr 24, 2026 · 3 min read


Rolling updates let you update your application to a new version with zero downtime. Kubernetes gradually replaces old pods with new ones, ensuring that a minimum number of pods remain available throughout the process. This guide covers configuring and managing rolling updates on your Breeze cluster.

How Rolling Updates Work

When you update a Deployment (for example, changing the container image), Kubernetes creates a new ReplicaSet and incrementally scales it up while scaling down the old ReplicaSet. At every step, the total number of available pods stays above a configured threshold.

Configuring the Update Strategy

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 6
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: myapp:v1.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10

Key Parameters

  • maxSurge — how many extra pods above the desired count can exist during the update (absolute number or percentage; percentages are rounded up)
  • maxUnavailable — how many pods can be unavailable during the update (absolute number or percentage; percentages are rounded down)

With 6 replicas, maxSurge: 2 and maxUnavailable: 1, Kubernetes can have up to 8 pods total and as few as 5 available during the rollout.
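To make the arithmetic concrete, here is a quick shell sketch of the bounds Kubernetes enforces for these settings (the variable names are illustrative, not part of any Kubernetes API):

```shell
# Mirror the Deployment above: 6 replicas, maxSurge: 2, maxUnavailable: 1
replicas=6
max_surge=2
max_unavailable=1

# Upper bound on total pods and lower bound on available pods during the rollout
max_pods=$((replicas + max_surge))
min_available=$((replicas - max_unavailable))

echo "at most ${max_pods} pods total, at least ${min_available} available"
```

Running this prints "at most 8 pods total, at least 5 available", matching the figures above.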

Triggering an Update

# Update the container image
kubectl set image deployment/web-app web-app=myapp:v2.0

# Or edit the deployment directly
kubectl edit deployment web-app

# Or apply an updated manifest
kubectl apply -f deployment.yaml

Monitoring the Rollout

# Watch the rollout progress
kubectl rollout status deployment/web-app

# View rollout history
kubectl rollout history deployment/web-app

# See details of a specific revision
kubectl rollout history deployment/web-app --revision=3

Rolling Back

If the new version has issues, roll back immediately:

# Rollback to the previous revision
kubectl rollout undo deployment/web-app

# Rollback to a specific revision
kubectl rollout undo deployment/web-app --to-revision=2

Readiness Probes Are Critical

Without readiness probes, Kubernetes considers a pod ready as soon as the container starts, potentially routing traffic before the application is initialized. Always configure readiness probes so the rolling update only proceeds when new pods are genuinely serving traffic.

Pausing and Resuming

For canary-style deployments, pause the rollout after updating a few pods:

kubectl rollout pause deployment/web-app
# Test the new pods, check metrics, verify on Breeze dashboard
kubectl rollout resume deployment/web-app

Best Practices

  • Always use readiness probes to ensure safe rollouts on your Breeze cluster
  • Set maxUnavailable: 0 (with maxSurge of at least 1 — both cannot be zero) if you need true zero-downtime updates
  • Use minReadySeconds to wait before marking a new pod as available
  • Record changes with the kubernetes.io/change-cause annotation (the older kubectl apply --record flag is deprecated) for meaningful rollout history
  • Test rollbacks regularly so you are prepared when production issues arise
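Putting several of these practices together, a strict zero-downtime strategy block might look like the following sketch (the values are illustrative; adjust them to your workload):

```yaml
spec:
  replicas: 6
  minReadySeconds: 10        # a new pod must stay ready 10s before it counts as available
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # add at most one extra pod at a time
      maxUnavailable: 0      # never drop below the desired replica count
```

With maxUnavailable: 0, the rollout only ever removes an old pod after a replacement has passed its readiness probe and the minReadySeconds window.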
