Kubernetes Service Mesh with Linkerd

By Admin · Jan 17, 2026 · Updated Apr 25, 2026

Managing Linkerd effectively is a crucial skill for any Kubernetes administrator. This tutorial provides step-by-step instructions for deploying a workload into the Linkerd service mesh, along with best practices for production environments.

Prerequisites

  • Basic familiarity with the Linux command line
  • A running Kubernetes cluster, with kubectl installed and configured to talk to it
  • A registered domain name (for public-facing services)

Deploying the Application

It's recommended to test this configuration in a staging environment before deploying to production. This helps identify potential compatibility issues and allows you to benchmark performance differences.
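Before deploying anything into the mesh, the Linkerd control plane itself must be installed in the cluster. A minimal sequence using the official Linkerd 2.x CLI (exact flags can differ slightly between releases, so treat this as a sketch and consult the release notes for your version):

```shell
# Install the Linkerd CLI locally (inspect the script first if you prefer)
curl -fsSL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin

# Validate that the cluster meets Linkerd's requirements
linkerd check --pre

# Install the CRDs, then the control plane, and verify the result
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check
```

Once `linkerd check` passes, the control plane is ready and you can move on to deploying the application.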


# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linkerd-app
  labels:
    app: linkerd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: linkerd
  template:
    metadata:
      labels:
        app: linkerd
      annotations:
        linkerd.io/inject: enabled  # ask Linkerd to inject its sidecar proxy
    spec:
      containers:
      - name: app
        image: nginx:1.27  # placeholder; replace with your application's image
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "128Mi"
            cpu: "250m"
          limits:
            memory: "256Mi"
            cpu: "500m"

These kubectl commands run against whichever cluster your current kubeconfig context points to, so double-check the context (kubectl config current-context) before applying anything. Root or sudo privileges are not required.

Applying and Verifying the Configuration

The default configuration works well for development environments, but production servers require additional tuning. Pay particular attention to connection limits, timeout values, and logging settings.


# Apply the configuration
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Verify the deployment
kubectl get pods -l app=linkerd
kubectl describe deployment linkerd-app
kubectl logs -f deployment/linkerd-app

Each field in the manifest serves a specific purpose: the labels tie the pods to the Service selector, the inject annotation brings the pods into the mesh, and the resource requests and limits keep the pods schedulable while bounding their memory and CPU use.
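To confirm that the sidecar proxy was actually injected, count the containers per pod; a meshed pod should report 2/2 ready containers (the application plus linkerd-proxy):

```shell
# Meshed pods report 2/2 ready containers (app + linkerd-proxy)
kubectl get pods -l app=linkerd

# Ask Linkerd to validate the data-plane proxies directly
linkerd check --proxy
```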

Exposing the Application with a Service

The Service below gives the meshed pods a stable cluster-internal address, mapping port 80 on the Service to port 8080 in each container. The selector must match the pod labels exactly, or the Service will have no endpoints.


# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: linkerd-service
spec:
  selector:
    app: linkerd
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP

After applying this manifest, the Service should appear with a ClusterIP assigned and endpoints matching your running pods. If the endpoint list is empty, fix the selector before proceeding to the next step.
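A quick way to check the wiring between the Service and the pods (standard kubectl commands; the endpoint list should contain one IP per running pod):

```shell
# The Service should have a ClusterIP and matching endpoints
kubectl get service linkerd-service
kubectl get endpoints linkerd-service
```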

Security Implications

One of Linkerd's main benefits is that it automatically upgrades traffic between meshed pods to mutual TLS, with certificates issued and rotated by the control plane and no application changes required. Treat the mesh as one layer of defense, though: NetworkPolicies, RBAC, and image hygiene still apply.
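To observe mTLS in practice, you can install Linkerd's viz extension and inspect the edges between workloads; this is a sketch assuming the viz extension is available for your Linkerd release:

```shell
# Install the viz extension for metrics and the dashboard
linkerd viz install | kubectl apply -f -
linkerd viz check

# List workload-to-workload edges; the SECURED column shows mTLS status
linkerd viz edges deployment
```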

Common Issues and Solutions

  • High memory usage: Review the resource requests and limits in the Deployment. Remember that the injected linkerd-proxy sidecar adds its own memory overhead on top of the application container.
  • Connection timeout: Verify the Service selector matches the pod labels. Use kubectl get endpoints linkerd-service to confirm the Service has endpoints; an empty list means no pods match.
  • Pod won't start: Check the events with kubectl describe pod and the logs with kubectl logs. Common causes include image pull errors, failing readiness probes, or insufficient cluster resources.
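When a meshed pod misbehaves, it helps to look at the application container and the injected proxy separately. The proxy container is named linkerd-proxy by the injector; adjust the application container name to match your Deployment:

```shell
# Scheduling and probe events for the pods
kubectl describe pod -l app=linkerd

# Application container logs (replace 'app' with your container's name if it differs)
kubectl logs deployment/linkerd-app -c app

# Sidecar proxy logs
kubectl logs deployment/linkerd-app -c linkerd-proxy
```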

Wrapping Up

Following this guide, your Linkerd setup should be production-ready. Keep an eye on resource usage as your traffic grows, and don't forget to test your backup and recovery procedures periodically.
