Build data pipelines and batch processing workflows with Argo Workflows on Kubernetes. This guide provides step-by-step instructions for installing, configuring, and running Argo Workflows on your VPS-based Kubernetes infrastructure.
Overview
Argo Workflows is a container-native workflow engine for Kubernetes, implemented as a set of Custom Resource Definitions (CRDs). Each workflow step runs in its own container, and steps can be composed sequentially or as a DAG, which makes it well suited to data pipelines and batch processing on VPS-hosted clusters.
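As a concrete illustration, here is a sketch of a two-step sequential Workflow in the style of a small ETL pipeline. The names, parameter values, and `busybox` image are illustrative only; note that each `- -` entry in `steps` starts a new sequential step group, so `transform` only runs after `extract` completes.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: etl-example-
spec:
  entrypoint: pipeline
  templates:
    - name: pipeline
      steps:
        - - name: extract          # first step group
            template: step
            arguments:
              parameters: [{name: msg, value: extract}]
        - - name: transform        # second group: runs after extract
            template: step
            arguments:
              parameters: [{name: msg, value: transform}]
    - name: step
      inputs:
        parameters:
          - name: msg
      container:
        image: busybox             # placeholder image
        command: [echo, "{{inputs.parameters.msg}}"]
```

Steps within the same group (a single `-` deeper) would run in parallel; for more complex fan-in/fan-out graphs, a `dag` template with explicit `dependencies` is the usual choice.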
Installation
# Install using Helm (most common method)
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argo-workflows argo/argo-workflows \
--namespace argo \
--create-namespace \
--values values.yaml
# Verify installation
kubectl get pods -n argo
kubectl get crd | grep argoproj.io
Configuration
# values.yaml - Production configuration
# NB: key names vary by chart version; confirm with `helm show values argo/argo-workflows`
replicaCount: 3
resources:
  limits:
    cpu: "1"
    memory: "1Gi"
  requests:
    cpu: "200m"
    memory: "256Mi"
persistence:
  enabled: true
  storageClass: local-path
  size: 10Gi
monitoring:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
security:
  podSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 1000
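Beyond chart values, batch pipelines usually need an artifact repository so that steps can pass files between pods. A hedged sketch of the `workflow-controller-configmap`, assuming an S3-compatible store such as MinIO running in the cluster; the endpoint, bucket, and secret names below are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    s3:
      endpoint: minio.storage.svc:9000   # placeholder endpoint
      bucket: argo-artifacts             # placeholder bucket
      insecure: true                     # plain HTTP inside the cluster
      accessKeySecret:
        name: minio-creds                # placeholder Secret
        key: accesskey
      secretKeySecret:
        name: minio-creds
        key: secretkey
```

With this in place, templates can declare `outputs.artifacts` and downstream steps can consume them as `inputs.artifacts` without any step-level storage configuration.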
Basic Usage
# Submit a Workflow manifest (generateName requires `create` rather than `apply`)
kubectl create -f hello-workflow.yaml -n argo
# Watch its progress
kubectl get workflows -n argo --watch
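A minimal Workflow manifest to save as, e.g., `hello-workflow.yaml` (a placeholder filename) and submit with `kubectl create`; the `busybox` image is illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-   # a unique suffix is generated on create
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: busybox
        command: [echo, "hello from Argo Workflows"]
```

Because the manifest uses `generateName` instead of `name`, each `kubectl create` produces a new Workflow object, which is why `kubectl apply` is not appropriate here.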