Architecture · Technology

Cilium and the Future of Container Networking with eBPF


Updated: 2026-05-03

Cilium[1] is the project redefining how Kubernetes networking works. Unlike traditional CNIs such as Calico or Flannel, which in their default configurations rely on iptables, Cilium replaces much of the network stack with eBPF[2] programs loaded directly into the kernel. The result: significantly better performance, deeper visibility, and more expressive security policies.

Key takeaways

  • iptables scales linearly O(n): with thousands of services, node CPU saturates just managing rules.
  • Cilium loads eBPF programs at kernel hook points (XDP, tc, socket, cgroup) and uses O(1) hash-table lookups instead of linear chains.
  • Documented benchmarks show 30-50% lower p95 latency, 2-3x more throughput, and up to 70% less kernel CPU.
  • Hubble adds real-time observability without needing tcpdump or conntrack.
  • Migration from another CNI is possible incrementally; staging tests are recommended first.

The problem with iptables

iptables has been the standard Linux packet-filtering tool since the late 1990s. In small K8s clusters it works fine. In large clusters it’s a measurable bottleneck for three reasons:

  • O(n) scaling. Each new rule adds an entry at the end of the chain. With 5,000 services, every packet traverses thousands of comparisons.
  • Expensive updates. Changing a rule requires recreating the entire chain, with a performance hit during the change.
  • Opaque debugging. Tracking why a specific packet was accepted or dropped requires deep expertise.

kube-proxy in iptables mode reflects all these problems: with thousands of services, node CPU saturates just managing rules, before processing any actual payload.
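Cilium can take over kube-proxy’s role entirely, so services are resolved by eBPF maps instead of iptables chains. A minimal Helm values sketch (key names assumed from the Cilium chart; on older chart versions the replacement setting takes `strict` rather than a boolean):

```yaml
# values.yaml fragment: run Cilium with kube-proxy fully replaced by eBPF.
kubeProxyReplacement: true
# With kube-proxy gone, Cilium must be told how to reach the API server directly:
k8sServiceHost: <api-server-host>   # placeholder, set to your control-plane endpoint
k8sServicePort: 6443
```

With this in place, kube-proxy can be removed from the cluster and service lookups become hash-map operations rather than chain traversals.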

What makes Cilium different

Cilium loads eBPF programs at kernel hook points (XDP, tc, socket, cgroup). When a packet arrives:

  • The eBPF program processes it directly in the kernel, bypassing the conventional networking stack.
  • O(1) hash-table lookups replace iptables’ linear chains.
  • No data copy between user and kernel space for network operations.
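As an illustration of those hook points, the Cilium Helm chart exposes where the load balancer attaches. A hedged sketch (value names assumed from the chart; XDP acceleration requires NIC driver support):

```yaml
# values.yaml fragment: push load balancing down to the XDP hook.
loadBalancer:
  acceleration: native   # attach at XDP in the NIC driver, before the kernel stack
  mode: dsr              # direct server return: replies skip the load-balancing node
routingMode: native      # route pod traffic natively instead of tunnelling it
```

The earlier the hook (XDP vs tc vs socket), the fewer kernel layers each packet touches.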

Documented benchmarks show:

  • Latency reduction: 30-50% lower p95 latency vs iptables under service-mesh loads.
  • Higher throughput: 2-3x packets per second on large nodes.
  • Lower CPU consumption: up to 70% less kernel CPU in clusters with thousands of services.
[Figure: eBPF networking architecture diagram showing XDP, tc, and socket hooks in the Linux kernel]

Capabilities beyond basic CNI

Cilium goes far beyond “pod networking”. The most relevant additional capabilities are:

L7 network policies

Cilium can apply policies not only by IP/port (L3/L4) but by application content (L7). The following example permits only GET requests to /api/v1/.* from the frontend:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: "allow-frontend-api"
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api/v1/.*"
```

With iptables this granularity isn’t possible — it would require an additional proxy.

Integrated observability with Hubble

Hubble[3] is Cilium’s observability layer. It shows in real time which packets are processed, which policies apply, and which connections are established — a dashboard replacing much of what previously required tcpdump + conntrack. This visibility directly complements what Pixie offers for Kubernetes observability: while Pixie captures application telemetry, Hubble covers the network layer.
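Hubble ships with Cilium and is switched on through the same Helm chart. A sketch of the relevant values (names assumed from the chart; the metrics list is illustrative):

```yaml
# values.yaml fragment: enable Hubble observability on top of Cilium.
hubble:
  enabled: true
  relay:
    enabled: true   # cluster-wide flow API consumed by the Hubble CLI and UI
  ui:
    enabled: true   # web dashboard with service maps and live flows
  metrics:
    enabled:        # per-flow metrics exported for Prometheus scraping
      - dns
      - drop
      - http
```

Once the relay is up, flows can be queried live with the `hubble observe` CLI instead of capturing packets by hand.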

Transparent encryption

Cilium can encrypt all pod-to-pod traffic with WireGuard or IPsec, enabled with a single flag. Without an additional service mesh.
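In Helm terms, that single flag looks roughly like this (value names assumed from the Cilium chart; WireGuard needs a kernel with WireGuard support, roughly 5.6+):

```yaml
# values.yaml fragment: transparent pod-to-pod encryption.
encryption:
  enabled: true
  type: wireguard   # or "ipsec"; WireGuard keys are managed per node by the agent
```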

Ambient mesh mode (Istio integration)

Istio’s ambient mesh[4] removes sidecars in favour of a shared per-node ztunnel plus optional waypoint proxies. Cilium can serve as the underlying CNI for parts of that design, with per-node performance that sidecar deployments struggled to match.

[Figure: Cilium with Hubble architecture diagram showing the observability layer over eBPF]

When to adopt Cilium

Cilium makes clear sense in these scenarios:

  • Large K8s clusters (>50 nodes, >1000 services): scale benefits are evident.
  • Service mesh without sidecars: Cilium with Hubble + L7 policies can replace much of traditional Istio.
  • Strict security requirements: L7 policies, transparent encryption, strong service identity.
  • Deep observability needed: Hubble provides visibility other CNIs don’t natively offer.

Cases where it may not pay off:

  • Small clusters (<20 nodes): Cilium’s operational overhead outweighs the benefits.
  • Old kernel (<4.19): without a modern kernel, eBPF features are limited.
  • Teams without eBPF experience: Cilium debugging requires eBPF understanding — real learning curve.

Migration from another CNI

Migrating from Calico or Flannel to Cilium isn’t trivial, but the process is well documented. Typical steps:

  1. Test in staging with a dedicated node pool.
  2. Rolling deployment replacing Calico node by node (the app must tolerate the brief per-pod reconnection).
  3. Validate existing network policies: syntax is similar but there are subtle differences.
  4. Gradual activation of advanced features (Hubble, encryption, L7 policies) after stabilisation.

Organisations including Datadog, Adobe, Bell Canada, and AWS itself with its EKS service[5] have documented successful migrations.

To complete the observability stack on top of Cilium, also see OpenTelemetry as unified standard and the Grafana stack for logs, traces, and metrics. The concept of eBPF as monitoring technology extends to Pixie as well — all share the same kernel primitives.

Conclusion

Cilium isn’t just another CNI: it’s an architectural shift putting eBPF at the centre of Kubernetes networking. For large clusters or with advanced security and observability requirements, it’s hard to justify not adopting it. For small clusters, the recommendation is to know the project and be ready for the transition when growth justifies it.

References

  1. Cilium
  2. eBPF
  3. Hubble
  4. Istio ambient mesh
  5. AWS EKS

Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.