Kubernetes Notes - Networking
For applications running inside Kubernetes to function correctly, they must be able to communicate with each other and with external systems. Kubernetes provides a networking model that enables communication between Pods, Services, and external clients.
Pod Internal
Each Pod receives its own internal IP address inside the Kubernetes cluster. This allows Pods to communicate directly with each other using that IP.
Let’s spin up a pod to demonstrate.
kubectl run nginx --image=nginx -n demo
pod/nginx created
Get the IP of the pod.
kubectl describe pod/nginx -n demo | grep IP:
  cni.projectcalico.org/podIP: 172.16.235.48/32
IP:   172.16.235.48
  IP:  172.16.235.48
Pods are ephemeral: if a Pod is restarted or replaced, its IP may change. Because of this, applications should not rely on Pod IP addresses directly. Kubernetes solves this with Services.
Services
Instead of connecting directly to Pod IPs, clients communicate with a Service IP, and Kubernetes forwards the traffic to the appropriate Pods.
Create Using YAML
Make sure to check the label and selector on the pod created earlier using:
kubectl describe pod/nginx -n demo

# or
kubectl get pods --show-labels -n demo
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          26m   run=nginx
Or just redeploy the pod with your own label and selector.
svc-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo
spec:
  selector:
    run: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
An important note: always point targetPort at the port the container image exposes. port can be set to any port you like; it is also the port inherited by LoadBalancer and Ingress when exposing the Service outside the cluster.
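For example, here is a sketch where clients reach the Service on 8080 while nginx still serves on 80 (the name nginx-alt is hypothetical, chosen just to avoid clashing with the Service above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-alt      # hypothetical name, for illustration only
  namespace: demo
spec:
  selector:
    run: nginx
  ports:
    - protocol: TCP
      port: 8080       # clients connect to the Service on 8080
      targetPort: 80   # traffic is forwarded to the container's port 80
  type: ClusterIP
```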
kubectl create -f svc-nginx.yaml
Create using kubectl
kubectl expose pod nginx \
  --name=nginx \
  --type=LoadBalancer \
  --port=80 \
  -n demo
service/nginx exposed
Verify
Check the Service that was created. Note that a Service usually won't be assigned an External-IP unless MetalLB or a cloud load-balancer integration is configured - I'll discuss this in another post.
kubectl get svc -n demo
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
nginx   LoadBalancer   10.43.152.217   192.168.254.220   80:32311/TCP   8s
Pods can now reach the Service through this stable cluster IP. Let's spin up another Pod to test with. Make sure it runs in the same namespace, and keep it running indefinitely.
kubectl run busybox --image=busybox -n demo -- sleep infinity
Exec into the pod and wget the Service IP.
kubectl exec pod/busybox -n demo -- wget -q -O - 10.43.152.217
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Service DNS
Kubernetes automatically creates internal DNS entries for Services. Instead of remembering IP addresses, services can be accessed using DNS.
Format:
service-name.namespace.svc.cluster.local
For the nginx service we created, it is formatted like this:
nginx.demo.svc.cluster.local
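The naming scheme is mechanical enough to sketch as a tiny shell helper (svc_fqdn is a hypothetical function for illustration, not a kubectl feature):

```shell
# Assemble the cluster-internal DNS name for a Service,
# assuming the default cluster domain "cluster.local".
svc_fqdn() {
  local name="$1" ns="$2"
  echo "${name}.${ns}.svc.cluster.local"
}

svc_fqdn nginx demo   # prints nginx.demo.svc.cluster.local
```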
Same Namespace
Now, using the same setup as earlier, we'll access the pod through the Service DNS name this time.
kubectl exec pod/busybox -n demo -- wget -q -O - nginx.demo.svc
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Notice that I only used nginx.demo.svc. This works because the resolver's search domains fill in the rest; within the same namespace, even plain nginx would resolve.
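The shorthand works because the kubelet writes cluster search domains into each Pod's /etc/resolv.conf. A typical one looks like this (the nameserver IP matches the 10.43.0.0/16 service CIDR seen above, but it depends on your cluster's DNS configuration):

```
search demo.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.43.0.10
options ndots:5
```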
Cluster Wide
Let's access the nginx pod in the demo namespace from a different namespace; let's call it universe.
Deploy busybox in universe namespace.
kubectl create ns universe
namespace/universe created

kubectl run busybox --image=busybox -n universe -- sleep infinity
pod/busybox created
kubectl exec pod/busybox -n universe -- wget -q -O - nginx.demo.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Service Types
Kubernetes supports different Service types depending on how the application should be accessed.
ClusterIP (Default)
ClusterIP exposes a Service inside the cluster only. Used for internal communication between microservices.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo
spec:
  selector:
    run: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
NodePort
A NodePort Service exposes the application on a port of every node in the cluster. NodePort allows external access but requires knowing the node IP and port.
Port range:
30000 – 32767
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo
spec:
  selector:
    run: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080
  type: NodePort
To test, curl the nodePort on one of the nodes.
curl localhost:30080
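The allowed range above can be encoded as a quick sanity check before applying a manifest (in_nodeport_range is a hypothetical helper, not part of kubectl):

```shell
# Return success if the port is inside the default NodePort range (30000-32767).
in_nodeport_range() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

in_nodeport_range 30080 && echo "ok"         # 30080 is a valid nodePort
in_nodeport_range 80 || echo "out of range"  # 80 is below the range
```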
LoadBalancer
A LoadBalancer Service provisions an external load balancer through the cloud provider, or through MetalLB on bare metal. This is the type we used in the example.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: demo
spec:
  selector:
    run: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
kubectl get svc -n demo
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
nginx   LoadBalancer   10.43.152.217   192.168.254.220   80:32311/TCP   8s
To test, curl the external IP assigned to the Service.
curl 192.168.254.220
hostPort and hostNetwork
With hostPort and hostNetwork, you attach the Pod directly to your node's ports.

The difference is that hostPort still goes through the Kubernetes CNI plugin, which maps the container/pod port to the host network. With hostNetwork, the container attaches itself directly to the host's network namespace. Be careful with hostNetwork, especially if your container uses a privileged port (e.g. SSH's port 22): binding it could leave you unable to SSH into your node.
Look at the following example to understand this further. Deploy both YAML files.
nginx-hostport.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostport
spec:
  containers:
    - name: nginx-hostport
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 8080
Notice that with hostPort you can choose which host port to attach to.
nginx-hostnetwork.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnetwork
spec:
  hostNetwork: true
  containers:
    - name: nginx-hostnetwork
      image: nginx
      ports:
        - containerPort: 80
Here you are stuck with the port the image exposes.
Verify which node each pod was deployed to.
kubectl get pods -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP                NODE       NOMINATED NODE   READINESS GATES
nginx-hostnetwork   1/1     Running   0          5m30s   192.168.254.202   master02   <none>           <none>
nginx-hostport      1/1     Running   0          5m35s   172.16.235.61     master03   <none>           <none>
Notice that the hostNetwork pod picked up the host's IP, while the hostPort pod still uses a pod IP assigned by the CNI.
Use netstat to verify the network attachment. SSH to the node each pod is running on.
# hostPort
tcp        0      0 192.168.254.203:56836   172.16.235.21:8080      TIME_WAIT   -

# hostNetwork
sudo netstat -anp | grep 80
tcp6       0      0 :::80                   :::*                    LISTEN      483234/nginx: maste
Curl to verify (master02 IP: 192.168.254.202, master03 IP: 192.168.254.203).
# hostPort
curl 192.168.254.203:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

# hostNetwork
curl 192.168.254.202:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Reverse Proxy
Instead of exposing many services individually, a reverse proxy such as NGINX can act as a gateway in front of the cluster. This is not ideal and can be a hassle: you end up creating a tunnel to a VPS with a public IP, or using a Cloudflare tunnel, and either way you have to create a manual entry for every service you want to expose to the internet.
Some popular options:
- Caddy
- NGINX reverse proxy
- Pangolin
- Cloudflare Tunnel
Ingress Controllers
An Ingress Controller manages external access to services in a Kubernetes cluster. It acts as a reverse proxy inside the cluster, routing traffic to the correct services. One popular controller is Ingress-Nginx.
This goes hand in hand with cert-manager, which handles automatic certificate creation and renewal.
This is also quite a big topic. Start with the ingress controller, then check whether your domain provider is supported by cert-manager. I'm using Cloudflare, which has its own certificate authority to handle certificates inside Kubernetes.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: marktaguiad-dev-ingress
  annotations:
    cert-manager.io/issuer-kind: OriginIssuer
    cert-manager.io/issuer-group: cert-manager.k8s.cloudflare.com
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - marktaguiad.dev
      secretName: marktaguiad-dev-tls
  rules:
    - host: marktaguiad.dev
      http:
        paths:
          - path: /
            # pathType: ImplementationSpecific
            pathType: Prefix
            backend:
              service:
                name: marktaguiad-dev
                port:
                  number: 80
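For reference, the issuer those cert-manager annotations point at looks roughly like this. This is a sketch based on Cloudflare's origin-ca-issuer project; the issuer name, namespace, and the Secret name/key holding the Origin CA service key are assumptions for your own setup:

```yaml
apiVersion: cert-manager.k8s.cloudflare.com/v1
kind: OriginIssuer
metadata:
  name: origin-issuer        # assumed name, referenced by the Ingress annotations
  namespace: default
spec:
  requestType: OriginECC     # or OriginRSA
  auth:
    serviceKeyRef:
      name: origin-ca-key    # assumed Secret holding your Cloudflare Origin CA key
      key: key
```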
Container Network Interface (CNI)
In Kubernetes, a CNI (Container Network Interface) plugin is what handles networking for pods. Kubernetes itself does not implement pod networking; it delegates it to a CNI plugin.
- Provides network connectivity between pods and services
- Assigns IP addresses to pods
- Configures routing and network isolation (optional)
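The plugin chain is configured through JSON files in /etc/cni/net.d/ on each node. A minimal sketch of such a config is below; the bridge name and subnet are illustrative values for a simple bridge network, not what Calico or Flannel actually install:

```json
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
```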
Notice that when you create a k8s cluster using kubeadm, the nodes are stuck in the NotReady state until you install a CNI.
First, install the standard CNI plugin binaries.
cd /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v1.9.0/cni-plugins-linux-amd64-v1.9.0.tgz

# extract in /opt/cni/bin/
tar -xvf cni-plugins-linux-amd64-v1.9.0.tgz
Now install a CNI network add-on. For an easy setup that is good for learning and development, I recommend Flannel. For production-grade clusters, Calico is more than enough. Research this topic further if you are interested.

Let's install Calico.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.31.4/manifests/calico.yaml