What is a Pod?
The smallest thing Kubernetes knows about. Everything starts here.
Before you can understand Kubernetes, you need to understand Pods. Not because they are complicated (they are not), but because they are the atomic unit of everything. When you ask Kubernetes to run something, it always runs it inside a Pod.
Think of Kubernetes as a city. Containers are people, and Pods are apartments. A person does not just wander the city; they live somewhere. That somewhere gives them an address, shared walls, shared utilities. Containers in Kubernetes work the same way: they live inside Pods, and the Pod gives them a shared network and shared storage.
A container runs one process. A Pod gives containers a home.
Here is the key insight that trips people up early: you almost never create containers directly in Kubernetes. You create Pods. The Pod creates the containers. This indirection is intentional: it lets Kubernetes manage scheduling, networking, and restarts at the Pod level without caring what is inside.
A Pod has exactly one IP address, shared by all containers inside it. If you have two containers in the same Pod, they can talk to each other using localhost: no Service, no DNS, just localhost. This is the most important thing to remember about Pod networking.
Pods are also ephemeral. This is a word Kubernetes people love, and it just means temporary. Pods are designed to die and be replaced. They are not pets. They are cattle. When a Pod dies, Kubernetes creates a new one with a new IP, a new name, a fresh start. This is why you never store important data inside a Pod without a proper Volume. More on that in the Storage chapter.
Inside a Pod
What actually lives inside that little box, and why it matters.
A Pod is not magic. It is a spec: a description of what should run. Kubernetes reads that spec, picks a node, and makes it happen. But what exactly is inside that spec?
Every Pod has three things worth understanding: containers, volumes, and metadata. Containers are the processes you want to run. Volumes are shared storage that containers in the same Pod can access. Metadata is how you find the Pod later: its name, namespace, and labels.
Let's look at what Kubernetes actually gives a Pod when it starts:
- A unique IP address, assigned from the cluster's Pod CIDR range. This IP is only reachable from inside the cluster.
- A hostname: by default, the Pod's name. Containers inside the same Pod reach each other over localhost.
- Environment variables, injected at startup, often from ConfigMaps or Secrets.
- Mounted volumes: directories that can be shared between containers or persisted beyond the Pod's lifetime.
- Resource limits: how much CPU and memory each container is allowed to use.
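As a quick sketch of how some of these pieces surface inside a container, the Downward API can expose the Pod's own IP and name as environment variables. The Pod name and image below are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: introspect-demo        # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo IP=$POD_IP NAME=$POD_NAME; sleep 3600"]
      env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP    # the Pod's cluster-internal IP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name   # the Pod's name, also its hostname
```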
Here is something most tutorials skip: every Pod actually runs a hidden container called the pause container. You will never see it in your YAML, but it is always there. Its only job is to hold the network namespace open. All the other containers in the Pod join that network namespace, which is why they all share the same IP and can use localhost.
If you SSH into a node and run crictl ps, you will see one pause container per Pod. Do not touch it. It is Kubernetes infrastructure.
One more thing that trips people up: containers inside a Pod do not share their filesystems by default. Each container has its own isolated filesystem from its image. The only way two containers share files is through a shared Volume that both mount. This is intentional: it keeps containers isolated while still allowing cooperation through explicit mounts.
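A minimal sketch of that cooperation, with a made-up Pod name: a writer container appends to a file on an emptyDir volume, and a reader tails the same file. Both mount the same Volume, so they see the same directory even though their root filesystems stay separate:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: share-demo             # hypothetical name
spec:
  volumes:
    - name: shared-data
      emptyDir: {}             # lives exactly as long as the Pod does
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /data/log.txt; sleep 5; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "touch /data/log.txt; tail -f /data/log.txt"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```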
Pod Lifecycle
Pods are born, they run, they die. Understanding this cycle saves you hours of debugging.
Every Pod you create goes through a predictable journey. Knowing this journey means you can look at a Pod's status and instantly understand what is happening and what to do about it.
The five phases you will see in kubectl get pods:
- Pending: Kubernetes accepted the Pod but has not placed it on a node yet, or the image is still being pulled. Often because no node has enough resources.
- Running: the Pod is on a node and at least one container is running. This does not mean your app is healthy, just that the container process started.
- Succeeded: all containers exited with code 0. Common for batch jobs.
- Failed: at least one container exited with a non-zero code and will not restart.
- Unknown: Kubernetes lost contact with the node. Usually a node networking problem.
The Pod phase is a summary. What you really want to look at is the container state. A Pod can be Running while a container inside it is Waiting or Terminated. This is one of the most common sources of confusion.
Container states:
- Waiting: the container has not started yet. Check the reason: ContainerCreating, ImagePullBackOff, CrashLoopBackOff.
- Running: the container process is alive.
- Terminated: the container exited. Check the exit code: 0 means success, 137 means OOMKilled (out of memory), 1 usually means an application error.
Restart policies control what happens when a container exits:
- Always (the default): Kubernetes always restarts it. Good for long-running services.
- OnFailure: restarts only on a non-zero exit. Good for batch jobs that should retry on error.
- Never: no restarts. Good for one-shot jobs.
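Note that restartPolicy is set at the Pod level, not per container. A sketch with a made-up name, for a one-shot task that should retry on error:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: migrate-once           # hypothetical name
spec:
  restartPolicy: OnFailure     # retry on non-zero exit, stop once it succeeds
  containers:
    - name: migrate
      image: busybox:1.36
      command: ["sh", "-c", "echo running migration; exit 0"]
```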
When a Pod misbehaves, work through this sequence:
1. kubectl get pods: see the phase and restart count.
2. kubectl describe pod <name>: read the Events section at the bottom.
3. kubectl logs <name> --previous: see why the last container instance crashed.
4. kubectl get events --sort-by=.metadata.creationTimestamp: get a cluster-wide view.
Writing Pod YAML
Your first real YAML. Every field explained like it matters because it does.
YAML is how you talk to Kubernetes. Every resource (Pods, Deployments, Services) is described in YAML. Most beginners either avoid it, using only imperative kubectl commands, or copy-paste it without understanding it. Either approach will eventually break you, on the exam and in production.
Let's build a Pod manifest from scratch, line by line.
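A representative manifest (names, image, and values here are illustrative, not prescriptive) exercising the fields discussed below: metadata with labels, a container with a port, and resource requests and limits:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                    # hypothetical name
  namespace: default
  labels:
    app: web                   # Services and Deployments select Pods by labels like this
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80    # documentation only; it does not expose anything
      resources:
        requests:
          cpu: 100m            # what the scheduler reserves when picking a node
          memory: 128Mi
        limits:
          cpu: 500m            # hard ceiling: excess CPU use is throttled
          memory: 256Mi        # hard ceiling: exceeding this gets the container OOMKilled
```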
A few things worth understanding deeply here:
- labels are how everything finds everything in Kubernetes. Services select Pods by label. Deployments manage Pods by label. Get comfortable writing them and querying them.
- containerPort is purely documentation. It does not open a firewall rule or expose anything. The container's process opens the port itself; this field just helps humans understand which port the app uses.
- requests vs limits: requests are what the scheduler uses to decide which node has room; limits are the hard ceiling. A container that exceeds its memory limit gets OOMKilled. One that exceeds its CPU limit is throttled (slowed down, not killed).
Now apply it and check it:
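Assuming the manifest is saved as pod.yaml and the Pod is named web (a placeholder name), a typical apply-and-inspect sequence looks like this:

```shell
kubectl apply -f pod.yaml      # create (or update) the Pod
kubectl get pods -o wide       # phase, restarts, Pod IP, and node
kubectl describe pod web       # events, container states, mounts
```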
Multi-Container Pods
When one container is not enough. The patterns that show up on every CKA and in every production cluster.
Most Pods run one container. But Kubernetes supports multiple containers per Pod, and there are three patterns every engineer should know by name. These patterns appear on the CKA exam and in real production clusters everywhere.
Before we go through the patterns, there is one question that comes up constantly in the CKA community and genuinely confuses people, so let's clear it up first.
In Kubernetes 1.29+, the official docs introduced a new pattern: you can now declare a sidecar as an initContainer with restartPolicy: Always. This confused a lot of people, including engineers actively studying for the CKA.
Here is the mental model that makes it click:
Classic init containers run to completion before the main container starts. They do one job and exit. Think: "download this config file, then exit 0."
Sidecar containers (the new native kind) run alongside the main container for the entire Pod lifetime. They never exit on their own. Think: "keep shipping these logs forever."
The fact that sidecars are technically declared under spec.initContainers with restartPolicy: Always is an implementation detail. On the CKA exam, if the spec has a container running an infinite loop (while true; do ...; done), that is a sidecar. If it runs to completion, that is a classic init container.
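A sketch of that native declaration, with made-up names: the log shipper sits under initContainers, but restartPolicy: Always keeps it running alongside the main container for the Pod's whole life:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar       # hypothetical name
spec:
  initContainers:
    - name: log-shipper
      image: busybox:1.36
      restartPolicy: Always    # this one field turns an init container into a sidecar
      command: ["sh", "-c", "while true; do echo shipping logs; sleep 5; done"]
  containers:
    - name: app
      image: nginx:1.25
```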
A sidecar runs alongside the main container for the full lifetime of the Pod. It enhances the main container without modifying it. The main container does its job; the sidecar adds a capability on top.
Real examples: a log shipper tailing the app's log file and forwarding to Elasticsearch. A certificate rotator watching for expiring TLS certs. An Envoy proxy handling all outbound traffic (this is exactly how Istio works).
An init container runs before the main container starts and must succeed before the main container is allowed to begin. It runs once, completes, and exits. This is perfect for setup tasks: waiting for a database, running migrations, downloading config.
Init containers are defined under spec.initContainers. They run in order, one at a time. If one fails, Kubernetes restarts the Pod.
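A sketch of a classic init container, assuming the app needs a database reachable before it starts (the service name mydb and port are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init          # hypothetical name
spec:
  initContainers:
    - name: wait-for-db
      image: busybox:1.36
      # runs once to completion; the main container starts only after this exits 0
      command: ["sh", "-c", "until nc -z mydb 5432; do echo waiting; sleep 2; done"]
  containers:
    - name: app
      image: nginx:1.25
```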
Inline (single string):

```yaml
command: ["sh", "-c", "while true; do date; sleep 5; done"]
```

Array form (recommended):

```yaml
command:
  - sh
  - -c
  - "while true; do date; sleep 5; done"
```
Both work. The array form is safer because YAML quoting rules can bite you with inline strings containing special characters. On the exam, the array form is less likely to fail due to indentation or quoting errors.
On the CKA, when you need to create a Pod with a complex multi-container spec, do not write the YAML from scratch. Use this workflow; it saves three to four minutes and avoids syntax errors:
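One common version of this workflow (the Pod name and image here are placeholders) generates a one-container skeleton imperatively, then edits it:

```shell
# Generate a skeleton instead of typing boilerplate by hand
kubectl run mypod --image=busybox:1.36 --dry-run=client -o yaml > pod.yaml
# Add the extra containers / initContainers in an editor
vim pod.yaml
kubectl apply -f pod.yaml
```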
If the command is an infinite loop (while true, tail -f, a server process), it is a sidecar. Declare it under initContainers with restartPolicy: Always.
If the command runs once and exits (nc -z to check a port, wget to download a file, a database migration script), it is a classic init container. Declare it under initContainers without restartPolicy.
The fact that a container with an infinite loop could technically reach a natural completion point is irrelevant. If the spec intends it to run forever, treat it as a sidecar.
An ambassador container acts as a proxy between the main container and the outside world. The main container only ever talks to localhost. The ambassador handles routing, retries, auth, and TLS to the real destination. This way you add capability without touching the application code at all.
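A sketch of the ambassador shape, with entirely placeholder images: the app connects to localhost:6379 as if Redis were local, and the ambassador forwards that traffic to the real Redis wherever it lives:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador    # hypothetical name
spec:
  containers:
    - name: app
      image: my-app:1.0              # placeholder; the app talks only to localhost:6379
    - name: redis-ambassador
      image: my-redis-proxy:1.0      # placeholder proxy that forwards to the real Redis
      ports:
        - containerPort: 6379        # documentation of the localhost port it serves
```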
```shell
kubectl get pod <name> -o yaml > pod.yaml
```

Edit the file, then:

```shell
kubectl replace --force -f pod.yaml
```
The --force flag deletes and recreates the Pod. It will have a new IP and new start time, but the spec will be correct. This is the accepted exam technique.