
K3s Multi-Node Cluster with Embedded etcd

By Admin · Mar 15, 2026 · Updated Apr 25, 2026

Running k3s with embedded etcd is a practical way to get a highly available Kubernetes cluster on modest hardware. This tutorial provides step-by-step instructions for a multi-node configuration, along with best practices for production environments.

Prerequisites

  • Basic familiarity with the Linux command line
  • Root or sudo access to the server
  • A registered domain name (for public-facing services)
  • At least three VPS nodes running Ubuntu 22.04 or later (2GB+ RAM each recommended); embedded etcd needs an odd number of server nodes to keep quorum
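With the prerequisites in place, the cluster itself can be bootstrapped using the upstream install script. A minimal sketch for three server nodes with embedded etcd — the IP address and token are placeholders you must substitute:

```shell
# On the first server: initialize a new cluster with embedded etcd
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Read the join token from the first server
sudo cat /var/lib/rancher/k3s/server/node-token

# On the second and third servers: join the existing cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server \
  --server https://<first-server-ip>:6443

# Verify all nodes have joined (run on any server)
sudo k3s kubectl get nodes
```

All three nodes run as servers here, so each hosts an etcd member and the control plane survives the loss of any single node.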

Deploying the Application

When scaling this setup, consider vertical scaling (adding more RAM/CPU) first, as it's simpler to implement. Horizontal scaling adds complexity but may be necessary for high-traffic applications.
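When horizontal scaling does become necessary, it can be done without editing the manifest. A sketch, assuming the k3s-app Deployment defined in this guide:

```shell
# Raise the replica count from 2 to 4
kubectl scale deployment k3s-app --replicas=4

# Or scale automatically on CPU usage
# (requires metrics-server, which k3s bundles by default)
kubectl autoscale deployment k3s-app --min=2 --max=6 --cpu-percent=70
```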


# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k3s-app
  labels:
    app: k3s
spec:
  replicas: 2                 # two copies, so one node failure doesn't take the app down
  selector:
    matchLabels:
      app: k3s
  template:
    metadata:
      labels:
        app: k3s
    spec:
      containers:
      - name: k3s-app
        # Placeholder -- replace with your application's image.
        # Pin a specific tag rather than :latest for reproducible rollouts.
        image: registry.example.com/k3s-app:1.0
        ports:
        - containerPort: 8080  # port your application listens on
        resources:
          requests:            # guaranteed minimum, used by the scheduler
            memory: "128Mi"
            cpu: "250m"
          limits:              # hard ceiling; exceeding memory gets the container killed
            memory: "256Mi"
            cpu: "500m"

The resource requests and limits keep a misbehaving pod from starving its neighbors on a small node; adjust them to match your application's measured footprint rather than leaving them unset.

Applying and Verifying the Deployment

The k3s configuration requires careful attention to resource limits and security settings. On a VPS with limited resources, it's important to tune these parameters according to your available RAM and CPU cores.
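Beyond per-container limits, consumption can be capped for a whole namespace with a ResourceQuota. A minimal sketch — the quota values are illustrative, not recommendations:

```shell
# quota.yaml -- caps total requests and limits across the default namespace
cat <<'EOF' > quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: default
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
EOF
kubectl apply -f quota.yaml
```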


# Apply the configuration
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Verify the deployment
kubectl get pods -l app=k3s
kubectl describe deployment k3s-app
kubectl logs -f deployment/k3s-app

If the pods reach the Running state and the logs look clean, the rollout succeeded; when they don't, kubectl describe usually shows the reason in its Events section.

Configuring the Service and Persistent Storage

After applying these changes, monitor the server's resource usage for at least 24 hours to ensure stability. Tools like htop, iostat, and vmstat can provide real-time insights into system performance.
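These tools can be run ad hoc from any node; a few example invocations:

```shell
# Memory, swap, and CPU activity: one sample every 5 seconds, 12 samples
vmstat 5 12

# Extended per-device I/O statistics every 5 seconds
# (iostat is in the sysstat package: sudo apt install sysstat)
iostat -x 5

# Interactive process viewer
htop
```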


# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k3s-service
spec:
  selector:
    app: k3s           # routes traffic to pods carrying this label
  ports:
  - port: 80           # port the Service exposes inside the cluster
    targetPort: 8080   # containerPort on the backing pods
  type: ClusterIP      # internal-only; add an Ingress for external traffic

The ClusterIP Service above is reachable only from inside the cluster. To expose it publicly, add an Ingress resource pointing at k3s-service; k3s bundles the Traefik ingress controller by default, so no extra installation is needed.
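For stateful workloads, k3s also bundles the local-path storage provisioner, so a PersistentVolumeClaim works out of the box. A minimal sketch — the claim name and size are assumptions:

```shell
# pvc.yaml -- storage backed by k3s's bundled local-path provisioner
cat <<'EOF' > pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: k3s-app-data
spec:
  accessModes:
    - ReadWriteOnce        # local-path volumes live on a single node
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
kubectl apply -f pvc.yaml
```

Mount the claim into the Deployment via volumes and volumeMounts. Because local-path pins the data to one node, multi-node persistence requires networked storage such as Longhorn or NFS instead.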

  • Test your backup restore procedure monthly
  • Monitor disk space usage and set up alerts
  • Review log files weekly for anomalies
  • Enable automatic security updates for critical patches
  • Keep your system packages updated regularly
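With embedded etcd, k3s can snapshot the datastore directly, which covers the backup and restore points above. A sketch of the cycle, run as root on a server node — the snapshot name is an example:

```shell
# Take an on-demand snapshot (scheduled snapshots also run by default)
k3s etcd-snapshot save --name pre-upgrade

# List snapshots (stored under /var/lib/rancher/k3s/server/db/snapshots)
k3s etcd-snapshot ls

# Restore: stop k3s, then reset the cluster from a chosen snapshot file
systemctl stop k3s
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/<snapshot-file>
```

After a restore, the other server nodes must rejoin the reset cluster; copying snapshots off-node (e.g. to object storage) keeps them safe from a full host failure.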

Common Issues and Solutions

  • Connection timeout: Verify your firewall rules allow traffic on the required ports. Use ss -tlnp to confirm the service is listening on the expected port.
  • Service won't start: Check the logs with journalctl -xe -u k3s. Common causes include port conflicts, missing configuration files, or insufficient permissions.

Wrapping Up

Following this guide, your k3s setup should be production-ready. Keep an eye on resource usage as your traffic grows and don't forget to test your backup and recovery procedures periodically.
