Chapter 3: Understanding and Managing Pods

What Are Pods?

A Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in your cluster. Each Pod can contain:

  • A single container (common scenario).
  • Multiple tightly coupled containers that share the same network and storage resources.

Pods are designed to:

  • Run ephemeral (short-lived) processes.
  • Be rescheduled in case of failures.

Key Features of Pods

  1. Shared Network: All containers in a Pod share the same IP address and port space, and can reach one another via localhost.
  2. Shared Storage: Containers in a Pod can share mounted volumes to exchange or persist data.
  3. Lifecycle Management: Kubernetes restarts containers and, through controllers such as Deployments, replaces and scales Pods as needed.
  4. Ephemeral Nature: Pods aren’t permanent; a failed Pod isn’t resurrected but replaced by a new Pod with a new name and IP address.

Pod Architecture Overview

Here’s what a typical Pod architecture looks like:

  • Pod: The top-level Kubernetes resource.
  • Container(s): Individual containers inside the Pod.
  • Shared Resources: Includes the network namespace, IPC, and storage volumes.
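
Kubernetes can print reference documentation for any of these fields directly from the cluster, which is handy to keep open while reading this chapter:

kubectl explain pod.spec
kubectl explain pod.spec.containers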

Creating Your First Pod

We’ll create a simple Pod running an Nginx container.

Step 1: Write a YAML File

Save the following as nginx-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

Step 2: Create the Pod

Run the following command to create the Pod:

kubectl apply -f nginx-pod.yaml

Step 3: Verify Pod Status

kubectl get pods

Example Output:

NAME         READY   STATUS    RESTARTS   AGE
nginx-pod    1/1     Running   0          1m

Step 4: Inspect the Pod

To get detailed information about the Pod:

kubectl describe pod nginx-pod

Step 5: Access the Pod

1. Forward a local port to the Pod:

    kubectl port-forward nginx-pod 8080:80

2. Open http://localhost:8080 in your browser to see the Nginx welcome page.
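
Alternatively, while the port-forward from step 1 is still running, you can fetch the page from another terminal (assuming curl is installed):

    curl http://localhost:8080

The response should be the HTML of the Nginx welcome page.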

Advanced Pod Scenarios

1. Multi-Container Pod

Sometimes, you need multiple containers in a Pod to work together (e.g., a web server with a sidecar logging container).

YAML File Example

Save this as multi-container-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: logger
    image: busybox
    command: ["sh", "-c", "while true; do echo 'Log data'; sleep 5; done"]

Create and Test

1. Create the Pod:

    kubectl apply -f multi-container-pod.yaml

2. Check logs for the logger container:

    kubectl logs multi-container-pod -c logger
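
Because the two containers share the Pod’s network namespace, the logger container can reach the web server on localhost. As a quick check (the busybox image ships with wget):

    kubectl exec multi-container-pod -c logger -- wget -qO- http://localhost:80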

Pod Lifecycle and Health Checks

1. Liveness Probe

Checks that the application inside the container is still healthy. If the probe fails, Kubernetes restarts the container.

Example Liveness Probe YAML

Save this as liveness-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-pod
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 5

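httpGet is not the only probe type; a probe can also run a command inside the container, treating a non-zero exit code as failure. A minimal sketch, using a hypothetical /tmp/healthy marker file:

livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 3
  periodSeconds: 5
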
2. Readiness Probe

Determines whether the Pod is ready to serve traffic. A Pod that fails its readiness probe is removed from Service endpoints until the probe succeeds again.

Example Readiness Probe YAML

Save this as readiness-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-pod
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10

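To see the readiness probe take effect, watch the Pod’s READY column flip from 0/1 to 1/1 once the first probe succeeds:

kubectl get pod readiness-pod -w
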
Scaling Pods for Production

Manual Scaling

A bare Pod cannot be scaled directly; use a Deployment, which manages a replicated set of identical Pods, for scaling.

1. Create a Deployment YAML file (nginx-deployment.yaml):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - containerPort: 80

2. Apply the Deployment:

    kubectl apply -f nginx-deployment.yaml

3. Verify the Pods:

    kubectl get pods

4. Scale the Deployment:

    kubectl scale deployment nginx-deployment --replicas=5

5. Verify the scaled Pods:

    kubectl get pods
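
Note that kubectl scale changes only the live object; the next kubectl apply of the file would set the count back to 3. For a declarative workflow, change replicas: 3 to replicas: 5 in nginx-deployment.yaml and re-apply:

    kubectl apply -f nginx-deployment.yaml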

Monitoring Pods

Basic Monitoring with kubectl

1. Check Pod resource usage (this requires the Metrics Server, installed below):

    kubectl top pod

2. Check events for a Pod:

    kubectl describe pod <pod-name>

Advanced Monitoring with Metrics Server

1. Install the Metrics Server:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

2. Monitor resources:

    kubectl top pods
    kubectl top nodes
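
The output shows current CPU and memory usage per Pod or node; the figures below are purely illustrative:

    NAME        CPU(cores)   MEMORY(bytes)
    nginx-pod   1m           3Mi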

Troubleshooting Pods

Scenario 1: Pod CrashLoopBackOff

1. Check Pod status:

    kubectl get pods

2. Describe the Pod to see recent events:

    kubectl describe pod <pod-name>

3. Check logs (add --previous to see output from the last crashed container):

    kubectl logs <pod-name>

Scenario 2: Container Image Pull Issues

1. Check events for messages such as ErrImagePull or ImagePullBackOff:

    kubectl describe pod <pod-name>

2. Verify that the image name and tag exist in the registry, and review the container's pull policy:

    imagePullPolicy: Always
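
In a Pod manifest, the pull policy sits on the container entry next to the image. A sketch (the 1.25 tag is just an example; pin whatever tag you have verified exists):

    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        imagePullPolicy: Always

With a fixed tag, IfNotPresent is the default pull policy; Always forces a fresh pull on every container start, which helps when a tag has been re-pushed.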

Conclusion

By completing this chapter, you’ve learned:

1. What Pods are and their importance in Kubernetes.
2. How to create, manage, and troubleshoot Pods.
3. How to implement advanced features like multi-container Pods, health checks, and scaling.