
Kubernetes Resource Quotas and Limit Ranges

By Admin · Feb 21, 2026 · Updated Apr 23, 2026 · 2 min read

Setting up resource quotas and limit ranges is a common requirement for administrators running Kubernetes on a VPS. A ResourceQuota caps the total CPU, memory, and object counts that a namespace may consume, while a LimitRange sets per-container defaults and bounds. This guide provides practical instructions that you can follow on Ubuntu 22.04/24.04 or Debian 12, though most steps apply to other distributions as well.
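As a concrete starting point, a minimal ResourceQuota might look like the sketch below; the team-quota name and quota-demo namespace are placeholders you should replace with your own.

```yaml
# resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: quota-demo
spec:
  hard:
    # Aggregate caps across all pods in the namespace
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
    # Object-count cap
    pods: "10"
```

Once a quota on CPU or memory is active, every new pod in the namespace must declare matching requests and limits, or the API server will reject it.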

Prerequisites

  • Root or sudo access to the server
  • Basic familiarity with the Linux command line
  • A running Kubernetes cluster (K3s or similar)
  • kubectl installed on your local machine
  • A VPS running Ubuntu 22.04 or later (2GB+ RAM recommended)

Deploying the Application

The deployment below declares explicit resource requests and limits on its container. Requests tell the scheduler how much CPU and memory to reserve for each pod; limits cap what the container may actually consume at runtime. Both values also count against any ResourceQuota active in the namespace.


# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quotas-app
  labels:
    app: quotas
spec:
  replicas: 2
  selector:
    matchLabels:
      app: quotas
  template:
    metadata:
      labels:
        app: quotas
    spec:
      containers:
      - name: quotas
        image: quotas:latest
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"

With these values, each pod is guaranteed 128Mi of memory and a quarter of a CPU core, and is capped at 256Mi and half a core. A container that exceeds its memory limit is OOM-killed and restarted; CPU usage beyond the limit is throttled rather than killed.
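Rather than repeating requests and limits in every manifest, a LimitRange can inject defaults into containers that omit them. A minimal sketch, with placeholder names matching the examples above:

```yaml
# limit-range.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: quota-demo
spec:
  limits:
  - type: Container
    # Applied as the limit when a container declares none
    default:
      cpu: 500m
      memory: 256Mi
    # Applied as the request when a container declares none
    defaultRequest:
      cpu: 250m
      memory: 128Mi
    # Hard upper bound for any single container
    max:
      cpu: "1"
      memory: 1Gi
```

Defaults are injected at admission time, so a LimitRange only affects pods created after it exists; running pods keep whatever values they started with.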

Scaling Considerations

When scaling this setup, consider vertical scaling (adding more RAM/CPU) first, as it's simpler to implement. Horizontal scaling adds complexity but may be necessary for high-traffic applications.
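Both approaches map to single kubectl commands. A sketch against the quotas-app deployment from above; the specific CPU and memory values are illustrative:

```shell
# Vertical: raise the container's requests and limits in place
# (triggers a rolling restart of the pods)
kubectl set resources deployment quotas-app \
  --requests=cpu=500m,memory=256Mi \
  --limits=cpu=1,memory=512Mi

# Horizontal: add replicas once a single pod is saturated
kubectl scale deployment quotas-app --replicas=4
```

Note that both operations consume quota: scaling to four replicas quadruples the deployment's footprint against the namespace's ResourceQuota, and pods that would exceed it simply fail to schedule.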

Applying and Verifying the Deployment

With the manifests saved, apply them to the cluster and confirm that the pods are scheduled and running. If a ResourceQuota is active in the namespace, the apply will fail for any container that does not declare requests and limits fitting within the remaining quota.


# Apply the configuration
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Verify the deployment
kubectl get pods -l app=quotas
kubectl describe deployment quotas-app
kubectl logs -f deployment/quotas-app

This configuration provides a good balance between performance and resource usage. For high-traffic scenarios, you may need to increase the limits further.
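To see how much of the quota is consumed and which defaults a LimitRange will inject, describe the objects directly. This assumes the placeholder names used earlier (team-quota and default-limits in the quota-demo namespace):

```shell
# Quota consumption versus the hard caps
kubectl describe resourcequota team-quota -n quota-demo

# Defaults and bounds enforced per container
kubectl describe limitrange default-limits -n quota-demo

# Live pod usage (requires metrics-server to be installed)
kubectl top pods -n quota-demo
```

Comparing the Used column of the quota against actual usage from kubectl top is a quick way to spot over-provisioned requests.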

Next Steps

With quotas now set up and running, consider implementing monitoring to track performance metrics over time. Regularly review your configuration as your workload changes and scale resources accordingly.
