Read Part 1: Bridging Kubernetes, Cilium, & eBPF to Networking Concepts for Network Engineers

Part 2: Kubernetes Security – From ACLs to Network Policies (Micro-Segmentation)

Securing traffic inside a Kubernetes cluster may seem foreign at first, but it parallels the approaches used to secure VLANs, subnets, and inter-device traffic in a traditional data center. Kubernetes has typically relied on iptables to enforce network rules, but that approach has limitations at scale; with eBPF, Cilium helps address that scale. The cluster's default posture is wide open (all pods can talk to all pods, as if no internal ACLs exist), so network segmentation is something you must deliberately configure – just as you would in a traditional network by inserting ACLs or firewalls between segments. Kubernetes provides the NetworkPolicy resource to define these rules.

Let's break down the concepts:

Network Policies = Distributed ACLs/Firewalls for Pods

When network engineers ask, "what is the equivalent of firewall rules in Kubernetes?", the answer is Network Policies. A NetworkPolicy in K8s is very much like an ACL applied to a "switch port" or an interface – except here the "ports" are pods. 

Traditional extended ACLs or firewall rules let you permit/deny traffic based on source IP, destination IP, protocol, and port. Kubernetes NetworkPolicies do the same for pod traffic. They operate at OSI Layer 3/4, filtering based on IP addresses, protocols, and ports. In fact, if you look at a sample NetworkPolicy spec, it will look similar to many firewall rule sets: you specify ingress or egress rules, source/destination selectors (which can be IP blocks or pod labels), and the ports/protocols to allow.

For example, you might create a NetworkPolicy that says "deny all egress from pods in namespace X to the internet" or "only allow pods labeled role=frontend to talk to pods labeled role=backend on TCP/3306". This is analogous to writing ACL entries on a router interface connecting two networks, or configuring a firewall zone policy.
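As a sketch of the first example – assuming, purely for illustration, that the cluster's pod network lives in 10.0.0.0/8 (yours will differ) – an egress policy that only permits destinations inside that range effectively blocks the internet for namespace X:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-to-internet
  namespace: x                  # the hypothetical namespace from the example
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8    # assumed cluster range; anything outside it (the internet) is dropped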

In traditional Cisco terms, "NetworkPolicy = an extended access-list (ACL) applied to pod traffic." For instance, a classic Cisco IOS ACL might say:

access-list 102 deny tcp any any eq 23  
access-list 102 permit ip any any  

This denies Telnet (TCP/23) from any to any, and permits all else. A Kubernetes NetworkPolicy can achieve the same effect for a set of pods (e.g., block Telnet, allow other traffic to the backend), but expressed in YAML. One wrinkle: the standard NetworkPolicy API is allow-list only – there is no explicit deny action – so "block Telnet" is written as "allow every port except 23" using port ranges.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-telnet-allow-all
  namespace: my-application
spec:
  podSelector:
    matchLabels:
      app: backend

  policyTypes:
    - Ingress

  ingress:
    # The standard NetworkPolicy API is allow-list only (no explicit deny action):
    # whatever is not listed here is dropped. Allowing every port except TCP/23
    # mirrors "deny tcp any any eq 23 / permit ip any any".
    - from:
        - podSelector: {}      # traffic from any pod in this namespace
        - ipBlock:
            cidr: 0.0.0.0/0    # traffic from any IP
      ports:
        - protocol: TCP
          port: 1
          endPort: 22          # TCP 1-22 allowed
        - protocol: TCP
          port: 24
          endPort: 65535       # TCP 24-65535 allowed; 23 (Telnet) is excluded
        - protocol: UDP
          port: 1
          endPort: 65535       # all UDP allowed (port ranges need a recent K8s/CNI)

The enforcement of these policies is handled by the CNI plugin on each node – effectively a distributed firewall where each node filters traffic to/from its pods according to the centrally defined policy. This distributed nature is powerful: instead of a centralized firewall appliance, the "firewall" dynamically travels with each pod. Every node enforces the rules for the pods it hosts, which is like having a stateful firewall on every server interface in your network – something quite hard to do with hardware alone.

It's important to note that not all CNI plugins support NetworkPolicy, but the leading ones (Calico, Cilium, Weave Net, etc.) do. Some, like Cilium, go beyond the basic spec by offering more granular, advanced policies; we'll touch on that shortly (Layer 7 policy). At its core, though, the concept is the same as ACLs: you define the permitted traffic, and everything else is denied by omission (K8s network policies become "default deny" once a policy selects a pod, similar to the implicit deny at the end of an ACL).

In cluster deployments, you often implement a "zero trust" approach, much like the enterprise security best practice of isolating segments as much as possible. Kubernetes NetworkPolicies make zero trust (or least-privilege networking) feasible by letting you isolate pods at the granularity of individual apps or microservices.

For example, you might isolate each application's components into their own "micro-segments" so that even if they share the same subnet (the same cluster network), they can't talk unless a policy allows it. This is directly analogous to using private VLANs or VRFs plus ACLs to isolate apps in a shared network.
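A common starting point for this posture is a namespace-wide default-deny policy, on top of which you layer explicit allows. A minimal sketch (the namespace name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-application
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress           # no ingress or egress rules are listed, so nothing is allowed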

Namespace

A Kubernetes Namespace can also be thought of as a security zone or logical segment (you could say it's similar in function to a VLAN). A Namespace by itself doesn't isolate network traffic, though – pods in different namespaces can talk to each other by default.

NetworkPolicies often use Namespaces as scopes or selectors. You can say "allow ingress from pods in namespace X to pods in namespace Y" – effectively creating a policy between two logical groups, much like permitting traffic from VLAN10 to VLAN20 on certain ports. The Namespace provides an organizational boundary, and NetworkPolicies then act as the ACL controlling inter-namespace or inter-app traffic. Many teams use a convention where all pods in a Namespace form a trust zone, with policies restricting external communication, akin to firewall rules between environment tiers.
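A policy along these lines (the namespace names and port are placeholders) would admit traffic from namespace X into every pod in namespace Y on a single port – much like a VLAN10-to-VLAN20 permit. It relies on the kubernetes.io/metadata.name label that recent Kubernetes versions set automatically on namespaces:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-namespace-x
  namespace: namespace-y            # the policy protects pods in namespace Y
spec:
  podSelector: {}                   # every pod in namespace Y
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: namespace-x   # pods in namespace X
      ports:
        - protocol: TCP
          port: 443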

 

Moving Past IPs: Identity-Based Policy with Cilium

One challenge in a highly dynamic environment like K8s is that pod IPs are ephemeral – they come and go as pods are created or rescheduled. Writing static IP-based rules (like traditional ACLs) becomes impractical. This is where Cilium's approach shines by introducing identity-based security. Instead of relying on IP addresses, Cilium assigns each endpoint (pod) an identity derived from labels. In practice, this means you write policies (NetworkPolicy) in terms of labels (like "role=frontend can talk to role=backend"), and Cilium takes care of mapping that to the actual IPs at any given time.

This is conceptually similar to Cisco's SGT (Security Group Tagging) in TrustSec or using object groups in ASA firewalls – where you don't hardcode IPs, but rather use identities or tags that are resolved dynamically. For a network engineer, the mental shift is from "IP x.x.x.x can talk to IP y.y.y.y" to "any pod with label X can talk to any pod with label Y." As pods scale out or shift around, you don't need to update the rules; policies follow the logical roles. This is a powerful form of micro-segmentation tied to workload identity, something traditional networks often implement with heavy solutions like ACI or VMware NSX. In Kubernetes with Cilium, it's built in at the software layer.

Concretely, if you have a frontend Deployment and a backend database Deployment, you might label the pods role=frontend and role=backend. A Cilium NetworkPolicy could then allow traffic from role=frontend to role=backend on port 3306 (MySQL), regardless of the pods' IPs. Under the hood, when pods come up or change, Cilium assigns them a numeric identity based on their label set and programs eBPF rules to enforce that policy. This is like an automated, dynamic ACL that updates itself – imagine an ACL that recognizes servers by their hostname or role and auto-adjusts as servers are added or removed!

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-to-backend
  namespace: my-application
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Ingress
  ingress:
    # Allow traffic from frontend pods on TCP/3306 (MySQL).
    # Cilium enforces this by label-derived identity, not by tracking pod IPs.
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 3306

 

Layer 7 Policy and Encryption – "Where's my App Firewall?"

In traditional environments, if you needed to filter at the application layer (e.g., allow only certain HTTP paths or DNS names), you'd likely deploy a Web Application Firewall (WAF) or an application-aware firewall that can do deep packet inspection. Kubernetes network plugins generally enforce L3/L4 by default, but Cilium can also enforce Layer 7 policies. Think of this as having a lightweight WAF capability as part of your network policy. For example, Cilium can restrict HTTP traffic based on paths and methods, or enforce that egress calls only go to certain DNS names.

If you've worked with layer-7 firewalls or proxies, this will sound familiar: Cilium's L7 filtering can whitelist specific API endpoints or require that traffic be HTTP GET only, for instance. This is achieved by leveraging eBPF at the socket layer to inspect data (or by integrating Envoy proxies for complex protocols). It's analogous to next-gen firewall features, but implemented per pod.
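As a rough sketch of what this looks like with Cilium's own CRD (the namespace, labels, and path regex are illustrative), an L7 rule rides on top of an ordinary L4 allow:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-get-only
  namespace: my-application
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            role: frontend          # identity-based source, no IPs
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"       # only HTTP GET ...
                path: "/api/v1/.*"  # ... to these paths is allowed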

However, it's not a replacement for a dedicated edge WAF – as the Cilium docs note, you would still use an external firewall or IDS/IPS for north-south web traffic. The L7 policies are mainly to secure east-west traffic between microservices at a granular level, something network engineers historically might not have done at the firewall due to sheer policy complexity. Now it's feasible because the enforcement is distributed and can understand service context.

Another security feature to be aware of is encryption of pod traffic. In a Cisco network, you might use IPsec tunnels or MACsec on links to encrypt sensitive data in transit. 

Kubernetes CNIs like Cilium can optionally encrypt all pod-to-pod traffic (using IPsec or WireGuard under the hood). This is akin to automatically establishing VPN tunnels between every node, so that even if someone snoops on the underlying network, the pod traffic is gibberish. It's a bit like having an overlay VPN mesh for the cluster. This might be required in compliance scenarios (similar to how one might encrypt traffic between data centers). The difference here is that it can be as easy as a config toggle – the cluster's CNI handles key exchange and encryption.
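With Cilium, for instance, transparent encryption is turned on through the CNI's own configuration – roughly along these lines in the Helm chart values (a sketch; check the docs for your Cilium version):

# values.yaml for the Cilium Helm chart (illustrative)
encryption:
  enabled: true
  type: wireguard   # or "ipsec"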

Key takeaways (Security):

  • Kubernetes NetworkPolicy = ACL for pods. It's the primary tool to restrict traffic, analogous to how you'd use firewall rules between subnets.
  • Distributed enforcement: Instead of a central firewall, each node enforces the rules for its pods (like a firewall on every host interface). This achieves micro-segmentation at scale.
  • Namespaces as zones: Namespaces can serve as logical groupings (like VLANs or VRFs), and policies often govern traffic between these groups (e.g., dev namespace vs prod).
  • Identity-based rules: Cilium extends policies to use labels (identities) instead of IPs. This is similar to using object groups or tags in traditional policy, ensuring rules adapt to dynamic IP changes.
  • L7 filtering: Advanced CNIs can filter on application-level info (HTTP methods, etc.), bringing WAF-like capabilities into the cluster. This is akin to having an application firewall for east-west traffic.
  • Encryption: Options to encrypt pod traffic (IPsec/WireGuard) can be enabled for defense-in-depth, much like encrypting data in transit in traditional networks.

In summary, Kubernetes adopts a "secure by policy" model that will feel natural to anyone used to crafting firewall rules. The tools and syntax differ, but whether you're writing access-list 101 or a NetworkPolicy YAML, you're ultimately implementing the same security principles. Next, we turn to observability – how do we monitor and troubleshoot this software-defined network?

 

Technologies