Interactive Lab · CKA · Intermediate

TAINTS, TOLERATIONS & AFFINITY

🚫 Taints · Repel pods from nodes
  • A taint marks a node to repel pods that do not explicitly tolerate it. Three effects: NoSchedule (never schedule), PreferNoSchedule (avoid if possible), NoExecute (evict existing pods too).
  • Control-plane nodes are tainted with node-role.kubernetes.io/control-plane:NoSchedule by default; this is why regular workloads do not run there.
  • Remove a taint by appending - to the end of the taint string.
Taint commands
# Add a taint
kubectl taint node worker-1 gpu=true:NoSchedule

# Remove a taint (note the trailing -)
kubectl taint node worker-1 gpu=true:NoSchedule-

# View all taints
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
✅ Tolerations · Accept a taint
  • A toleration allows a pod to be scheduled on a tainted node. It says: "I accept this constraint."
  • The operator can be Equal (exact key=value match) or Exists (key exists with any value).
  • Tolerating a taint does NOT force the pod onto that node. It only unlocks the possibility. Use affinity to force placement.
Toleration in Pod spec
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  # Tolerate any taint with key "dedicated":
  - key: "dedicated"
    operator: "Exists"
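As noted above, the toleration only unlocks the tainted node; it does not pin the pod there. A common pattern is to pair the toleration with a required node affinity rule. A sketch, assuming the GPU nodes also carry an illustrative gpu=true node label (the label name is an assumption, not something Kubernetes sets for you):

```yaml
# Assumes GPU nodes are both tainted gpu=true:NoSchedule
# and labeled gpu=true (label applied by you, e.g. kubectl label node worker-1 gpu=true)
spec:
  tolerations:
  - key: "gpu"            # unlocks the tainted GPU node
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu      # forces placement onto labeled GPU nodes
            operator: In
            values:
            - "true"
```

Taint plus toleration keeps everyone else off the node; affinity keeps this pod on it. Together they dedicate the node to GPU workloads.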
📍 Node Affinity · Target specific nodes
  • requiredDuringSchedulingIgnoredDuringExecution is a hard rule. The pod will NOT be scheduled if no node matches.
  • preferredDuringSchedulingIgnoredDuringExecution is a soft rule. The scheduler prefers matching nodes but places the pod anywhere if needed.
  • Operators: In, NotIn, Exists, DoesNotExist, Gt, Lt.
Node affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - eu-west-1a
            - eu-west-1b
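The example above shows only the hard (required) form. The soft form adds a weight from 1 to 100; the scheduler sums the weights of all matching preferred terms and favors the highest-scoring node. A sketch, assuming an illustrative disktype=ssd node label:

```yaml
# Soft rule: prefer SSD nodes, but schedule elsewhere if none match.
# The disktype label is an assumption for illustration.
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80              # 1-100; higher weights win when terms conflict
        preference:
          matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
```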
🤝 Pod Anti-Affinity · Spread replicas
  • Pod anti-affinity keeps a pod away from other pods matching a label selector. Use it to spread replicas across nodes or AZs for high availability.
  • topologyKey: kubernetes.io/hostname means "different node". topologyKey: topology.kubernetes.io/zone means "different AZ".
  • Combine with requiredDuringScheduling to enforce spreading, or preferredDuringScheduling to prefer it.
Anti-affinity spread replicas across nodes
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - my-app
        topologyKey: "kubernetes.io/hostname"
        # No two my-app pods can share the same node
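The required form above fails to schedule extra replicas once every node already hosts one. For the softer behavior mentioned in the bullets, the preferred form wraps the same selector in a podAffinityTerm with a weight; here zone-level spreading is used for illustration:

```yaml
# Soft rule: prefer to place my-app replicas in different AZs,
# but still schedule if all zones are already occupied.
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - my-app
          topologyKey: topology.kubernetes.io/zone
```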
Taint Effects
NoSchedule                  New pods without a toleration are not scheduled here
PreferNoSchedule            Soft version; avoid if possible
NoExecute                   Evicts existing pods that do not tolerate it
kubectl taint node ... -    Remove a taint by appending -
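With NoExecute, a toleration can also carry tolerationSeconds: the pod tolerates the taint for that long after it appears, then is evicted. This is how Kubernetes handles node failures; the admission controller adds tolerations like the following (300 seconds by default) to pods automatically:

```yaml
spec:
  tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300   # stay 5 minutes after the node goes not-ready, then evict
```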
Affinity Types
nodeAffinity             Target specific nodes by node labels
podAffinity              Co-locate with matching pods
podAntiAffinity          Stay away from matching pods
required vs preferred    Hard constraint vs soft preference