Part 1: Kubernetes Networking – From VLANs to BGP in the cloud

Kubernetes networking can be understood using the same fundamentals as any IP network, just applied in a dynamic environment. Let's break down how pods (the smallest deployable units of computing) communicate and how that compares to traditional networks:

Flat pod networks vs. VLAN segmentation

In a Kubernetes cluster, every pod gets an IP address and, by default, any pod can reach any other pod – effectively a flat network across all nodes. You can imagine this like a huge VLAN spanning all the hosts: an overlay network that makes pods on different machines feel like they're on the same LAN. In fact, many CNI (Container Network Interface) plugins create exactly that: an overlay network. For example, Flannel (a simple K8s CNI) uses VXLAN encapsulation – Virtual Extensible LAN, a familiar tunneling protocol – to tunnel pod traffic over the physical network. This means pods communicate over Layer 3 infrastructure as if they shared a Layer 2 segment. Network engineers can picture each Kubernetes node as a VXLAN Tunnel Endpoint (VTEP): it encapsulates outgoing pod packets into VXLAN and decapsulates incoming ones, similar to how you'd extend VLANs over IP in a multi-data-center fabric. By abstracting the physical network, Kubernetes overlay networks achieve cross-node connectivity without requiring every switch to know about every pod.
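One practical consequence of that encapsulation is MTU: the VTEP adds outer headers, so pod interfaces usually get a smaller MTU than the underlay. The back-of-the-envelope sketch below (plain Python, using an illustrative 1500-byte underlay MTU) shows the arithmetic behind the 1450-byte pod MTU you'll often see on VXLAN overlays:

```python
# Back-of-the-envelope MTU math for a VXLAN overlay over IPv4.
# The 1500-byte underlay MTU is just an example value.

VXLAN_OVERHEAD = {
    "outer Ethernet": 14,  # new Ethernet header added by the VTEP
    "outer IPv4":     20,  # underlay IP header (node-to-node)
    "outer UDP":       8,  # VXLAN rides over UDP (dst port 4789)
    "VXLAN header":    8,  # carries the 24-bit VNI
}

underlay_mtu = 1500
overhead = sum(VXLAN_OVERHEAD.values())
pod_mtu = underlay_mtu - overhead

print(f"Encapsulation overhead: {overhead} bytes")  # 50
print(f"Pod interface MTU:      {pod_mtu} bytes")   # 1450
```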

That said, not all deployments use overlays. Some clusters use routed pod networks where each node's pod CIDR is announced to the rest of the network. In traditional terms, this is like assigning each top-of-rack switch a subnet and routing between them – except Kubernetes can automate it. Projects like Calico and Cilium support using BGP (Border Gateway Protocol) to share pod routes. BGP peering between Kubernetes nodes and your data center routers allows pods to be directly routable in the wider network. This dynamic routing approach advertises pod IPs just like any other subnet in your network, promoting efficient east-west traffic flow and reducing latency by avoiding extra hops. It's comparable to extending your enterprise routing domain into the cluster: each node runs a BGP agent (think of it as a mini router) to announce the pod networks.
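To make that concrete, here is a small sketch (plain Python, with a made-up 10.244.0.0/16 cluster CIDR and made-up node IPs) of how a cluster CIDR gets carved into per-node blocks and what route each BGP-speaking node would effectively announce:

```python
import ipaddress

# Hypothetical cluster: the pod CIDR and node IPs below are illustrative only.
cluster_pod_cidr = ipaddress.ip_network("10.244.0.0/16")
node_ips = ["192.168.10.11", "192.168.10.12", "192.168.10.13"]

# Kubernetes (or the CNI's IPAM) hands each node a slice of the cluster CIDR,
# e.g. a /24 per node. With BGP (Calico/Cilium), each node then advertises
# its slice to the fabric with itself as the next hop.
per_node_cidrs = cluster_pod_cidr.subnets(new_prefix=24)

for node_ip, pod_cidr in zip(node_ips, per_node_cidrs):
    print(f"advertise {pod_cidr} next-hop {node_ip}")

# advertise 10.244.0.0/24 next-hop 192.168.10.11
# advertise 10.244.1.0/24 next-hop 192.168.10.12
# advertise 10.244.2.0/24 next-hop 192.168.10.13
```

From the upstream router's point of view, these look like any other learned routes; the "new subnet" just happens to live on a server instead of a switch.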

Kubernetes nodes (each running Cilium) can peer with external routers via BGP, exchanging routes for pod subnets and integrating the cluster's network seamlessly with the traditional fabric.

Whether using overlays (VXLAN/Geneve tunnels) or BGP, the goal is the same: ensure container-to-container connectivity across the cluster. Both approaches will feel familiar – either you're treating the cluster like one big VLAN or you're treating it like a routing domain with its own BGP ASN. In fact, many deployments use a hybrid: for instance, a Cilium cluster might encapsulate traffic by default but also advertise service IPs via BGP for external access. The good news for network engineers is that the principles of IP networking still apply. You'll still consider subnets, routes, MTUs for encapsulation, and so on – only now the configuration is handled by Kubernetes and the CNI plugin rather than by manual switch/router configs. Kubernetes implements its own networking model to meet its needs (pods moving between hosts, rapid scale-up/down). This includes pod-to-pod routing, node-to-node transport, ingress/egress rules, and even load balancing – essentially, many functions traditionally performed by switches, routers, and load balancers (tunneling, routing, NAT, load balancing) are now done in software within the cluster.


Service discovery and load balancing (K8s Services ≈ Virtual IPs)

Another key aspect of Kubernetes networking is how it handles service discovery and load balancing internally. In a Cisco network, you might use VIPs (virtual IPs) or DNS and a load balancer (like an F5 or Cisco ACE) to distribute traffic to a pool of servers. Kubernetes has the concept of a Service, which provides a stable IP (or DNS name) to access a dynamic set of pods. The most common type, ClusterIP, is essentially a virtual IP and port that cluster-internal clients can use, and which gets automatically load-balanced to the healthy pods behind it. Under the hood, Kubernetes uses techniques like iptables or IPVS (IP Virtual Server) rules on each node to achieve this – every node acts like a distributed load balancer that knows how to forward Service IP traffic to the correct pod endpoints. Think of a ClusterIP as analogous to a VIP configured on an L4 load balancer: traffic hitting that IP is DNATed (destination NAT) to one of the real server IPs (pod IPs). The big difference is that it's all handled by kube-proxy running on the hosts, rather than by a hardware appliance. For network engineers, it helps to realize that Kubernetes Services = software load balancers. The concept of pool members, health checks, and VIPs still exists, just defined in YAML instead of in a load-balancer CLI.
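The toy sketch below is a deliberately simplified model of that per-node behavior: traffic to the Service VIP is DNATed to one healthy pod endpoint, picked roughly at random, loosely mirroring what the iptables or IPVS rules do. All addresses here are invented for illustration; this is not how kube-proxy is implemented, just the effect of the rules it programs:

```python
import random

# Toy model of the per-node forwarding state kube-proxy programs.
# The VIP and pod IPs are made up for illustration.
service_vip = ("10.96.12.34", 80)   # ClusterIP:port -- "the VIP"
endpoints = [                       # healthy pod endpoints behind the Service
    ("10.244.0.15", 8080),
    ("10.244.1.7",  8080),
    ("10.244.2.22", 8080),
]

def dnat(dst):
    """Rewrite a packet's destination the way the node's DNAT rules would."""
    if dst == service_vip:
        return random.choice(endpoints)  # iptables mode uses a similar random pick
    return dst

for _ in range(3):
    print(f"{service_vip} -> {dnat(service_vip)}")
```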

For traffic entering the cluster (north-south), Kubernetes integrates with external load balancers or uses NodePorts. A Service of type LoadBalancer can automatically orchestrate a cloud load balancer or a MetalLB/BGP advertisement on-prem, reserving a VIP that routes into the cluster. This is similar to the traditional approach of allocating an IP on a load balancer sitting at the network edge. The NodePort mechanism (opening a port on each node and forwarding it to the service) is analogous to configuring static port forwarding or NAT on a router/firewall – each node acts as if it's doing a PAT from its nodeIP:nodePort into the service's pods. In summary, Kubernetes doesn't eliminate the familiar load balancing and routing tasks; it reimplements them in a distributed, automated way. You still design how traffic flows from clients to services (ingress controllers at L7, Services at L4, etc.), but Kubernetes handles the heavy lifting of programming the forwarding rules across the cluster.
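If you want to see those pieces from the API side, here is a minimal sketch using the official Kubernetes Python client. It assumes a reachable cluster in your kubeconfig and an existing Service named "web" of type LoadBalancer in the "default" namespace (both are assumptions for this example); it prints the nodePort "PAT" entries and the external VIP the load balancer is holding:

```python
from kubernetes import client, config

# Sketch only: assumes the official 'kubernetes' Python package, a reachable
# cluster, and a Service named "web" of type LoadBalancer in "default".
config.load_kube_config()
v1 = client.CoreV1Api()

svc = v1.read_namespaced_service(name="web", namespace="default")

for port in svc.spec.ports:
    # The nodePort is the port opened on every node's IP -- the PAT analogy.
    print(f"nodeIP:{port.node_port} -> {svc.spec.cluster_ip}:{port.port} -> pods:{port.target_port}")

for lb in (svc.status.load_balancer.ingress or []):
    # The VIP the external load balancer (cloud LB or MetalLB) advertises.
    print(f"external VIP: {lb.ip or lb.hostname}")
```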

Key takeaways (Networking):

  • Kubernetes gives each pod an IP on a flat network; by default it's as if all pods are on one big VLAN with full connectivity.
  • Overlays (VXLAN/Geneve) are used to tunnel pod traffic over L3 networks, much like extending a VLAN across routers. Every node acts as a VTEP for encapsulation/decapsulation.
  • Dynamic routing with BGP is an alternative: nodes announce pod CIDRs via BGP, making pods first-class citizens in your network routing. This is akin to using an IGP/EGP to distribute routes for new subnets (pods) automatically.
  • Kubernetes Services function like virtual IPs with built-in load balancing. The cluster's kube-proxy on each node plays the role of an L4 load balancer, implementing VIPs and server pools (via iptables, IPVS, or eBPF).
  • NodePort Services = static NATs on the node (open a port on a host to reach a service). LoadBalancer Services = integration with an external load balancer (or MetalLB using BGP) to provide a one-to-one mapping of a service to a routable IP.

Armed with these analogies, a network engineer can see Kubernetes networking not as a mysterious black box, but as a software-defined network built on familiar concepts.

Check out Part 2, where we explore how security is enforced in this model and how it compares to the ACLs and firewalls you've managed in traditional networks.
