
Parca: Open eBPF-Based Continuous Profiling

Updated: 2026-05-03

Parca[1] turns profiling into a continuous observability signal. Traditionally, profiling was ad hoc: you opened pprof when something went wrong. Parca does it 24/7 across the cluster, via eBPF, with under 1 % CPU overhead. The result: you see how your services’ CPU profile evolves over time, catch performance regressions before they reach production, and gain a new dimension for debugging.

Key takeaways

  • eBPF-based continuous profiling requires no application instrumentation.
  • Overhead is low enough (0.5–1 % CPU per node) to run always, not only during incidents.
  • Before/after deployment comparison is the highest-return immediate use case.
  • For compiled apps (Go, Rust, C), Parca is near-perfect; for interpreted apps, complement it.
  • Continuous profiling is the fourth observability signal, after metrics, logs and traces.

Why continuous profiling changes the rules

The usual performance debugging cycle has a structural problem: by the time you detect that something is slow, the profile from when it happened no longer exists. With Parca, that profile is always available. It stores compressed CPU samples with temporal resolution, so you can compare behaviour before and after any deployment, or reconstruct what a service was doing during a latency spike noticed forty minutes later.

Integration with the standard observability stack is straightforward. Parca exposes data that Grafana can consume, so the fourth signal slots in alongside Prometheus metrics, Loki logs and Tempo traces:

  • Metrics: what is happening?
  • Logs: what exactly occurred?
  • Traces: why did this request take so long?
  • Profiles: which function is consuming CPU in this service?
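
If Grafana is provisioned from files, wiring the fourth signal in is a small change. A minimal sketch, assuming the Helm release shown in the next section (a parca service in the monitoring namespace, Parca's default port 7070) and Grafana's built-in Parca data source type; adjust names to your setup:

bash
# Hypothetical provisioning file; the service address and port follow the
# Helm install below, and "parca" is Grafana's built-in Parca data source type.
cat > /etc/grafana/provisioning/datasources/parca.yaml <<'EOF'
apiVersion: 1
datasources:
  - name: Parca
    type: parca
    url: http://parca.monitoring.svc.cluster.local:7070
EOF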

Installing on Kubernetes

Parca deploys as an agent DaemonSet (one profiling pod per node) plus a central server for storage and the UI:

bash
helm repo add parca https://parca-dev.github.io/helm-charts
# Per-node eBPF agent (DaemonSet)
helm install parca-agent parca/parca-agent --namespace monitoring --create-namespace
# Central server: storage and web UI
helm install parca parca/parca --namespace monitoring

The agent profiles all node processes by default. Filter by pod labels, namespace, or container to reduce data volume.
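
Before touching filters, confirm the rollout is healthy and reach the UI locally. The label selector and service name below assume the chart's standard app.kubernetes.io labels and the release names used above:

bash
# One parca-agent pod per node should be Running
kubectl get pods -n monitoring -l app.kubernetes.io/name=parca-agent

# Browse the UI on http://localhost:7070 without exposing the service
kubectl port-forward -n monitoring svc/parca 7070:7070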

The interface: what you see and how to use it

The Parca UI offers four main views:

  • Temporal view: per-service CPU sampling over time, to spot when something changed.
  • Flame graph: CPU-consuming function hierarchy. Bar width is proportional to time spent; depth reflects the call stack.
  • Comparison: diff between two periods (before and after a deploy), with colour highlighting which functions worsened or improved.
  • Icicle view: the flame graph drawn inverted (root at the top), so you read downwards into the costliest call paths.

How to read a flame graph:

  • Width = CPU time: wider bar means more CPU consumed by that function.
  • Height = call depth: frames stack from the outermost call at the root towards the leaf functions.
  • Look for wide plateaus at the leaf end: those frames are where the CPU time is actually spent.
  • In comparisons, red means regression, green means improvement.

For teams with no prior experience, Brendan Gregg’s tutorial — he popularised flame graphs — is the most complete reference.
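
To build that intuition outside the Parca UI, the classic perf plus FlameGraph workflow from Gregg's repository produces the same kind of graph for a single process. A sketch, assuming perf is installed and using an illustrative PID:

bash
git clone https://github.com/brendangregg/FlameGraph
perf record -F 99 -g -p 12345 -- sleep 30   # sample PID 12345 at 99 Hz for 30 s
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > flame.svg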

Parca versus Pyroscope

Pyroscope[2] (now integrated into Grafana) has more mature multi-language support for Python, Ruby, Node, Java and .NET, and fits natively into the Grafana stack. The downside is per-language instrumentation.

Parca, in contrast:

  • A single eBPF agent profiles all languages without code changes.
  • Go, Rust and C/C++ support with debug info is excellent.
  • Java and .NET have experimental support via specific stack walkers.
  • For Python and Node, the agent sees interpreter frames, not Python code — complement with Pyroscope or py-spy.

Practical choice: Parca for Kubernetes clusters with a language mix and a zero-instrumentation policy; Pyroscope for Grafana-centric teams with already-instrumented apps.
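
For the Python gap specifically, a per-language profiler covers what the eBPF agent cannot see. A sketch with py-spy attached to a running process (the PID is illustrative; in a container the target needs the SYS_PTRACE capability):

bash
pip install py-spy
# Sample the interpreter for 60 s and write a flame graph SVG
py-spy record --pid 12345 --duration 60 -o py-profile.svg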

Overhead and storage

The real-world numbers are modest:

  • CPU: 0.5–1 % per node with the agent active.
  • Storage: roughly 1 GB/day for a 50-node cluster with compressed samples.
  • Network: minimal — samples are aggregated on the node before sending.

The Parca server can store profiles in S3 (or any S3-compatible object store) with configurable retention policies, keeping costs predictable.
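
A back-of-envelope check with the figures above keeps retention costs honest; the per-day rate is the estimate quoted here, scaled to an illustrative cluster size and retention window:

bash
NODES=120; RETENTION_DAYS=30
# ~1 GB/day per 50 nodes, scaled linearly
echo "$NODES $RETENTION_DAYS" | awk '{printf "~%.0f GB in object storage\n", (1.0/50)*$1*$2}'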

Security

The eBPF agent requires elevated permissions:

  • Privileged container for eBPF access.
  • HostPID to see host processes.
  • HostNetwork in some scenarios.

This is a significant attack surface. Scope permissions to the minimum needed, audit regularly, and consider whether the environment has hardening requirements that favour an alternative without host access.
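
A quick way to review what the agent was actually granted, assuming the DaemonSet keeps the release name from the install above:

bash
# Show host PID access and the container's security context
kubectl get daemonset parca-agent -n monitoring \
  -o jsonpath='{.spec.template.spec.hostPID} {.spec.template.spec.containers[0].securityContext}'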

Continuous profiling as an engineering culture

Beyond the tool, adopting continuous profiling changes how teams work:

  • Performance regressions are caught in staging, not production.
  • Optimisation decisions are made on data, not intuition.
  • New team members can explore the codebase through real flame graphs.
  • Capacity planning is based on measured CPU profiles, not estimates.

Teams that run continuous profiling permanently report significant reductions in performance-related incidents. The setup investment — a couple of hours of Helm — pays back on the first deployment where the flame graph surfaces a regression that would otherwise have reached production.

Conclusion

Parca is the fourth observability signal, and adopting it costs less than ignoring it. For compiled apps in Go, Rust or C, eBPF support is near-perfect. For interpreted apps, combine it with per-language profilers. If you already have OpenTelemetry for traces and metrics, adding Parca closes the stack. For teams serious about performance, 0.5 % CPU overhead is a very reasonable price for never debugging performance blind again.

  1. Parca: https://www.parca.dev/
  2. Pyroscope: https://grafana.com/oss/pyroscope/
