DaemonSets
DaemonSets ensure that a copy of a pod runs on every node (or a subset of nodes) in your cluster. They are ideal for system-level services that must operate on all nodes, such as logging, monitoring, and network agents.
Key benefits:
- Cover all nodes - One Pod per node
- Scale automatically with nodes - New nodes get pods, removed nodes lose pods
- Run system services - Ideal for logging, monitoring, and networking
- Target specific nodes - Using selectors or affinity
- Access host resources - Like logs, metrics, and system files
When to Use DaemonSets
DaemonSets are a good fit for services that need to run on every node or on a subset of nodes:
- Log collectors - Fluentd, Filebeat, Fluent Bit
- Monitoring agents - Node Exporter, Datadog agent, New Relic
- Network plugins - CNI plugins, load balancer controllers
- Security agents - Antivirus scanners, compliance tools
- Storage daemons - Distributed storage agents
Deploying a DaemonSet
Let's create a simple log collector DaemonSet that runs on all nodes and collects logs from the host filesystem:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
  namespace: kube-system
  labels:
    app.kubernetes.io/name: log-collector
    app.kubernetes.io/created-by: eks-workshop
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: public.ecr.aws/aws-observability/aws-for-fluent-bit:stable
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: containers
              mountPath: /var/lib/docker/containers
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: containers
          hostPath:
            path: /var/lib/docker/containers
- `kind: DaemonSet` - Creates a DaemonSet controller
- `metadata.name` - Name of the DaemonSet (log-collector)
- `spec.selector` - How the DaemonSet finds its pods (by labels)
- `spec.template.spec.containers.0.volumeMounts` - How the container accesses node files
- `spec.template.spec.volumes` - Host paths for accessing node logs
Key DaemonSet characteristics:
- No `replicas` field - Kubernetes automatically runs one pod per node
- Pods automatically scale as nodes are added or removed
- `hostPath` volumes allow Pods to access node files, if required
- Typically deployed in the `kube-system` namespace for system services, but can run in other namespaces
Deploy the DaemonSet:
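A minimal way to do this with kubectl, assuming the manifest above is saved as log-collector-daemonset.yaml (the filename is illustrative; use whatever path you saved it to):

```bash
# Apply the DaemonSet manifest to the cluster
kubectl apply -f log-collector-daemonset.yaml
```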
Inspecting Your DaemonSet
Check DaemonSet status:
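Assuming the DaemonSet was created in the kube-system namespace as above, you can query it directly:

```bash
# Show desired vs. ready pods for the DaemonSet
kubectl get daemonset log-collector -n kube-system
```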
You'll see output showing desired vs current pods:
NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   AGE
log-collector   3         3         3       3            3           2m
View the pods across all nodes:
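One way to list them, assuming the app=log-collector pod label from the manifest above (the -o wide flag adds the node column):

```bash
# List the DaemonSet's pods along with the node each one is scheduled on
kubectl get pods -n kube-system -l app=log-collector -o wide
```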
NAME                  READY   STATUS    NODE           AGE
log-collector-abc12   1/1     Running   ip-10-42-1-1   2m
log-collector-def34   1/1     Running   ip-10-42-2-1   2m
log-collector-ghi56   1/1     Running   ip-10-42-3-1   2m
Notice that each node runs exactly one pod.
Node Selection
Target specific nodes using nodeSelector:
spec:
  template:
    spec:
      nodeSelector:
        node-type: worker
      containers:
        - name: monitoring-agent
          image: monitoring:latest
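The selector above only matches nodes that carry the node-type: worker label, which is not applied by default. One way to add it (the node name is a placeholder):

```bash
# Label a node so the nodeSelector above matches it (replace with one of your node names)
kubectl label node <your-node-name> node-type=worker
```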
Or use nodeAffinity for more complex rules:
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
Use nodeSelector for simple label matches and nodeAffinity for more complex scheduling requirements.
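To see which nodes a selector or affinity rule would match, you can inspect node labels; for example, using the well-known kubernetes.io/arch label from the snippet above:

```bash
# Show each node's architecture label as an extra column
kubectl get nodes -L kubernetes.io/arch
```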
DaemonSets vs Other Controllers
| Controller | Purpose | Replica Count | Node Placement | Use Case |
|---|---|---|---|---|
| DaemonSet | One Pod per node | Automatic | All nodes or subset | System services |
| Deployment | Multiple interchangeable Pods | Configurable | Any node | Stateless apps |
| StatefulSet | Pods with stable identity | Configurable | Any node | Stateful apps |
DaemonSets are ideal for services that must run on every node or a specific set of nodes.
Key Points to Remember
- DaemonSets automatically run one pod per node
- Perfect for system-level services like logging and monitoring
- No need to specify replica count - it's automatic
- Can access node resources through hostPath volumes
- Use node selectors to target specific nodes
- Pods are automatically added/removed as nodes join/leave
- Ideal for consistent system functionality across all nodes