Chapter 5: Kubernetes Services – Connecting Pods to the World

What Are Kubernetes Services?

A Service in Kubernetes is an abstraction that exposes your Pods to other Pods, external clients, or both. Services provide a stable networking interface despite the ephemeral nature of Pods.

Why Do We Need Services?

  1. Dynamic Pod Management: Pods come and go, often with different IPs. Services ensure stable endpoints.
  2. Load Balancing: Distributes traffic across multiple Pods.
  3. Discovery: Enables Pods and external systems to find each other.
  4. High Availability: Ensures traffic is routed to healthy Pods.

Types of Services

  1. ClusterIP (default): Exposes the Service within the cluster.
  2. NodePort: Exposes the Service on each node’s IP and a static port.
  3. LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer.
  4. ExternalName: Maps a Service to an external DNS name.

Creating and Using Services

Step 1: ClusterIP Service

We’ll create a Service to expose an Nginx Deployment internally.

Deployment YAML

First, create the Deployment (nginx-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Apply the Deployment:

kubectl apply -f nginx-deployment.yaml

Service YAML

Now, create a Service (nginx-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

Apply the Service:

kubectl apply -f nginx-service.yaml

Access the Service

1. Get the Service details:

    kubectl get service nginx-service

2. Use the Service’s ClusterIP internally within the cluster.
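
To verify connectivity without leaving the cluster, you can run a throwaway Pod and curl the Service by name (a sketch; the `curlimages/curl` image and the Pod name `curl-test` are arbitrary choices):

```shell
# Launch a temporary Pod, curl the Service by name, then clean it up (--rm).
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://nginx-service
```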

Step 2: NodePort Service

Expose the same Deployment externally using a NodePort Service.

Modify the Service YAML

Change the type to NodePort (nginx-nodeport-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
  type: NodePort

Apply the Service:

kubectl apply -f nginx-nodeport-service.yaml

Access the Service

1. Find the Node’s IP address:

kubectl get nodes -o wide

2. Access the Service using the Node IP and NodePort (e.g., http://<NodeIP>:30007).
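
The two steps above can be combined using kubectl’s JSONPath output (a sketch; it assumes the node’s InternalIP address is reachable from where you run the command):

```shell
# Extract the first node's InternalIP, then hit the NodePort directly.
NODE_IP=$(kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl http://"$NODE_IP":30007
```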

Step 3: LoadBalancer Service

Expose the application to the internet using a LoadBalancer (requires a cloud provider).

Service YAML

Create the Service (nginx-loadbalancer-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Apply the Service:

kubectl apply -f nginx-loadbalancer-service.yaml

Access the Service

1. Get the external IP:

kubectl get service nginx-loadbalancer-service

2. Use the external IP to access the application in a browser.
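
Provisioning the cloud load balancer can take a minute or two; the EXTERNAL-IP column shows <pending> until it is ready. A sketch of watching for the address and then using it (note that some providers, e.g. AWS, populate a hostname instead of an IP):

```shell
# EXTERNAL-IP shows <pending> until the cloud load balancer is provisioned.
kubectl get service nginx-loadbalancer-service --watch

# Once assigned, capture the address (use .ingress[0].hostname on providers
# that report a DNS name rather than an IP).
LB_IP=$(kubectl get service nginx-loadbalancer-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl http://"$LB_IP"
```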

Step 4: ExternalName Service

Redirect traffic to an external service (e.g., Google Public DNS).

Service YAML

Create the Service (external-dns-service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: external-dns-service
spec:
  type: ExternalName
  externalName: dns.google

Apply the Service:

kubectl apply -f external-dns-service.yaml

Access the Service

Use the Service name (external-dns-service) as a DNS alias within the cluster: requests to it resolve, via a DNS CNAME record, to dns.google.
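
Because ExternalName Services are implemented purely in DNS, you can confirm the mapping with a lookup from inside the cluster (a sketch using a throwaway busybox Pod; the Pod name `dns-test` is arbitrary):

```shell
# The cluster DNS should answer with a CNAME pointing at dns.google.
kubectl run dns-test --rm -it --restart=Never \
  --image=busybox -- \
  nslookup external-dns-service
```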

Service Discovery

Kubernetes provides automatic service discovery via DNS.

DNS Example

1. Use the Service name (nginx-service) within the cluster:

curl http://nginx-service

2. Fully qualified domain names (FQDNs) follow the format:

<service-name>.<namespace>.svc.cluster.local
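
Putting the two forms side by side (assuming the Service lives in the default namespace and the commands run from a Pod inside the cluster):

```shell
# Same namespace: the short name is enough.
curl http://nginx-service

# Any namespace: the FQDN always resolves.
curl http://nginx-service.default.svc.cluster.local
```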

Load Balancing

Kubernetes Services automatically distribute traffic across healthy Pods.

Simulate Load Balancing

1. Inject each Pod’s name as an environment variable (the application must include it in its responses for you to see which Pod answered; stock Nginx does not do this out of the box):

  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

2. Access the Service repeatedly and observe requests being routed to different Pods:

curl http://<Service-IP>
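
You can also observe the distribution without modifying the application by listing the backend Pod IPs and issuing several requests from inside the cluster (a sketch; the `curlimages/curl` image and Pod name `lb-test` are arbitrary choices):

```shell
# The ENDPOINTS column lists the Pod IPs the Service balances across.
kubectl get endpoints nginx-service

# Issue several requests from a temporary Pod; kube-proxy spreads
# them across those endpoints.
kubectl run lb-test --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" http://nginx-service; done'
```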

Troubleshooting Services

Scenario 1: No Endpoints Found

1. Check whether any Pods match the Service selector:

kubectl get pods --selector app=nginx

2. Verify the Service configuration:

kubectl describe service nginx-service
        
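A quick way to confirm whether any Pods back the Service is to inspect its Endpoints object; an empty ENDPOINTS column almost always means a selector/label mismatch:

```shell
# Healthy output lists one <PodIP>:80 pair per matching, ready Pod;
# "<none>" means the selector matches no ready Pods.
kubectl get endpoints nginx-service
```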

Scenario 2: NodePort Not Accessible

1. Ensure the NodePort is open in the firewall.
2. Check the Node’s IP address and connectivity.

Monitoring Services

Using kubectl

1. Get Service details:

kubectl get service <service-name>

2. Describe the Service:

kubectl describe service <service-name>

Using Prometheus

1. Set up Prometheus (as explained in Chapter 4).
2. Monitor Service-specific metrics such as request rates and error rates.

Conclusion

By mastering Services, you’ve learned how to:

1. Expose Pods internally and externally.
2. Use different Service types for specific use cases.
3. Implement load balancing and service discovery.
