Cilium Service Mesh: When You Don’t Need Sidecars
Updated: 2026-05-03
Cilium started as an eBPF-based CNI and evolved into a complete alternative to traditional service meshes — without sidecars. Its architecture uses eBPF to handle policy, observability, and encryption in the kernel, with no per-pod proxy. In large clusters the resource savings are significant. Istio responded with Ambient Mode, which follows a similar philosophy. This article compares the sidecarless approaches and when to pick each.
Key takeaways
- Cilium eliminates sidecar overhead with in-kernel eBPF: roughly 5 GB of total RAM overhead in a 100-node, 1000-pod cluster versus roughly 100 GB for classic Istio.
- Per-node WireGuard encryption is less granular than per-service mTLS but much more efficient.
- For L7 policies (HTTP verbs, paths), Cilium starts an Envoy per node on-demand — not per pod.
- Hubble is the integrated observability layer: flows, dependency maps, and policy verdicts without additional tools.
- Cilium's sidecarless model has a longer track record than Istio Ambient's (GA 2023 vs GA 2024).
The Sidecar Problem
The traditional sidecar model (Linkerd, classic Istio) has real costs:
- One Envoy or linkerd-proxy instance per pod.
- Resource overhead: 50-200 MB of RAM plus CPU per pod.
- Added latency: 2-5 ms per round trip.
- Operational complexity: thousands of proxy processes to manage and upgrade.
In clusters with thousands of pods, those per-pod costs multiply into significant totals.
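The back-of-envelope arithmetic is straightforward. A sketch using the article's rough figures (the per-pod and per-node numbers are illustrative assumptions, not measurements):

```python
# Back-of-envelope comparison of sidecar vs per-node overhead,
# using the article's rough figures (assumed, not measured).
PODS = 1000
SIDECAR_RAM_MB = 100          # midpoint of the 50-200 MB per-pod range

sidecar_total_gb = PODS * SIDECAR_RAM_MB / 1000
print(f"Sidecar model:  ~{sidecar_total_gb:.0f} GB RAM across the cluster")

# Sidecarless: one agent per node instead of one proxy per pod.
NODES = 100
AGENT_RAM_MB = 50             # assumed per-node agent footprint

node_total_gb = NODES * AGENT_RAM_MB / 1000
print(f"Per-node model: ~{node_total_gb:.0f} GB RAM across the cluster")
```

With these assumptions the sidecar model lands around 100 GB and the per-node model around 5 GB — the gap scales with pod density per node, not with pod count alone.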
Cilium’s Approach
Cilium replaces sidecar proxies with:
- eBPF in kernel for simple policy and encryption.
- Centralised Envoy per node for complex L7 features (only where used).
- Hubble for native observability.
- CNI integration — Cilium is both the CNI and the service mesh in a single component.
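To illustrate the on-demand L7 path: a CiliumNetworkPolicy that matches on HTTP methods and paths causes Cilium to route only the selected endpoints' traffic through the node-local Envoy. A minimal sketch (the names and labels are hypothetical):

```yaml
# Hypothetical example: allow only GET /api/* from frontend to backend.
# HTTP-level rules like this activate the per-node Envoy on demand.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-get-api
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/api/.*"
```

Pure L3/L4 policies (no `rules.http` section) stay entirely in eBPF and never touch Envoy.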
Encryption Without Sidecars
Cilium encrypts inter-node traffic transparently with WireGuard (simple and fast) or IPsec (broader compatibility), with no sidecar injection. WireGuard keys are per node, not per pod — less granular than per-service mTLS, but far more efficient.
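Enabling it is a CNI-level setting rather than an injection step. A minimal Helm values fragment for the Cilium chart:

```yaml
# values.yaml fragment for the Cilium Helm chart
encryption:
  enabled: true
  type: wireguard   # or "ipsec" for broader kernel/hardware compatibility
```

Because encryption happens at the node level, workloads need no changes and no restart-on-inject lifecycle.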
Hubble: Integrated Observability
Hubble provides:
- Detailed per-connection flow logs.
- Service dependency maps (who talks to whom).
- Policy verdicts (why a request was allowed/denied).
- Prometheus and Grafana integration.
A functional equivalent of Kiali or linkerd-viz, but built in — no additional tools to deploy.
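Hubble is enabled through the same Helm chart as Cilium itself. A sketch of the relevant values (the metrics list shown is a common selection, not an exhaustive one):

```yaml
# values.yaml fragment: enable Hubble alongside the Cilium agent
hubble:
  enabled: true
  relay:
    enabled: true      # cluster-wide flow aggregation
  ui:
    enabled: true      # service dependency map in the browser
  metrics:
    enabled:
      - drop           # policy drops -> Prometheus
      - tcp
      - flow
      - http
```

Flows can then be inspected with the `hubble` CLI or scraped as Prometheus metrics.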
Cilium vs Istio Ambient
| Aspect | Cilium | Istio Ambient |
|---|---|---|
| Kernel layer | Native eBPF | iptables + ztunnel |
| L4 encryption | Per-node WireGuard | Per-identity mTLS in ztunnel |
| L7 features | Per-node Envoy on-demand | Per-namespace Waypoint |
| CNI integration | Native | Separate |
| Sidecarless maturity | GA 2023 | GA 2024 |
Cilium's sidecarless implementation has a longer track record. Istio Ambient is newer but inherits Istio's mature ecosystem.
Resource Comparisons
Indicative figures (100-node cluster, 1000 pods):
| Stack | Total overhead |
|---|---|
| Classic Istio (sidecars) | ~100 GB RAM |
| Linkerd | ~10 GB RAM |
| Cilium (CNI + mesh) | ~5 GB RAM |
| Istio Ambient | ~15 GB RAM |
When to Choose Cilium
Good fits:
- Large clusters (more than 500 pods) where sidecar overhead matters.
- Teams with eBPF experience or willing to invest.
- Greenfield Kubernetes without legacy CNI.
- L7 policy needs with high throughput.
- Multi-cluster with advanced connectivity requirements.
When NOT Cilium
- Small clusters where sidecar overhead isn't a real problem.
- Already running classic Istio with complex features — migrating doesn't pay off.
- Team without low-level networking experience.
- Fine per-pod identity requirements (prefer Istio Ambient).
Conclusion
Cilium represents a genuine service-mesh evolution: sidecarless, eBPF-native, CNI-integrated. For large clusters and technically capable teams, it offers real resource and feature advantages. It is not the right choice for everyone: Linkerd remains valid for simplicity, classic Istio for feature completeness, and Istio Ambient as a sidecarless alternative with different trade-offs. The service-mesh landscape now has more mature options than ever; base the decision on your technical context and team, not on trends.