eBPF has moved from an obscure kernel technology to one of the most consequential infrastructure tools of the past decade. Cilium uses it to replace kube-proxy. Falco uses it for runtime security. Pixie, Parca, and Pyroscope use it for always-on profiling without code changes. If you’re running any serious Linux workload in 2026 and haven’t developed a working understanding of eBPF, you’re operating with a significant blind spot.
This article explains how eBPF works at a practical level, surveys the production tools built on it, and gives you a realistic picture of where it adds genuine value versus where the hype outruns the reality.
What eBPF Actually Is
eBPF (extended Berkeley Packet Filter) is a kernel subsystem that allows you to run sandboxed programs inside the Linux kernel without modifying kernel source code or loading kernel modules. The programs are written in a restricted C subset, compiled to eBPF bytecode, verified by the kernel’s verifier for safety, and then JIT-compiled to native instructions.
The key architectural properties that make eBPF useful:
- Safety: The verifier statically analyzes all possible execution paths to ensure the program cannot crash the kernel: no out-of-bounds memory access, guaranteed termination, and no unbounded loops (bounded loops have been permitted since kernel 5.3).
- Performance: JIT-compiled eBPF runs at near-native speed. There is no context switch to user space.
- Hook points: eBPF programs can attach to kprobes (arbitrary kernel functions), tracepoints (stable kernel instrumentation points), XDP (network packet processing before the kernel network stack), traffic control ingress/egress, LSM hooks (Linux Security Modules), and more.
- Maps: eBPF programs communicate with user space and share state through typed key-value stores called maps. Hash maps, ring buffers, LRU maps, and per-CPU maps are all available.
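As an illustration of the map machinery, here is a sketch of a BPF-side hash map keyed by PID, written in the same restricted C dialect. This is kernel-side code that only compiles against libbpf's headers and runs inside the kernel after loading; the map and helper names below (`counts`, `bump`) are hypothetical, not from any particular tool:

```c
// Hypothetical sketch: a hash map counting events per PID.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 10240);
    __type(key, __u32);    /* PID */
    __type(value, __u64);  /* event count */
} counts SEC(".maps");

static __always_inline void bump(__u32 pid)
{
    __u64 one = 1, *val;

    val = bpf_map_lookup_elem(&counts, &pid);
    if (val)
        __sync_fetch_and_add(val, 1);  /* atomic: programs run concurrently */
    else
        bpf_map_update_elem(&counts, &pid, &one, BPF_ANY);
}
```

User-space code can then read `counts` through the same map, which is how tools export kernel-side aggregations without streaming every event.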
The important practical implication: eBPF lets you instrument the kernel and intercept network packets without writing kernel code, without rebooting, and without modifying applications. That’s why it’s behind so much modern observability and security tooling.
Writing a Minimal eBPF Program
Understanding the mechanics helps you use the tools built on top of eBPF more effectively. Here’s a minimal tracing example using libbpf and the BPF CO-RE (Compile Once, Run Everywhere) approach:
// trace_open.bpf.c: traces every openat() syscall, logging PID + filename
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct event {
    __u32 pid;
    char filename[256];
};

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 256 * 1024);
} events SEC(".maps");

SEC("tracepoint/syscalls/sys_enter_openat")
int trace_openat(struct trace_event_raw_sys_enter *ctx)
{
    struct event *e;

    e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
    if (!e)
        return 0;

    e->pid = bpf_get_current_pid_tgid() >> 32;
    bpf_probe_read_user_str(e->filename, sizeof(e->filename),
                            (void *)ctx->args[1]);
    bpf_ringbuf_submit(e, 0);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
This program attaches to the openat syscall tracepoint, captures the PID and filename, and sends them to a ring buffer map. A user-space consumer reads from the ring buffer and processes events. The kernel verifier checks this program before it runs — ensuring the filename read is bounded, the ring buffer reservation is checked for NULL, and the program terminates.
In production you would use a loader framework such as libbpf with its generated skeletons, BCC, or Cilium's ebpf-go library rather than hand-writing the user-space plumbing, but understanding the structure explains why eBPF tools have their particular capabilities and limitations.
eBPF for Observability: The Production Tools
Pixie: No-Instrumentation Application Observability
Pixie is a CNCF sandbox project (donated by New Relic) that uses eBPF to provide automatic observability for Kubernetes workloads — no SDK changes, no sidecar injections, no code deploys. It captures full request/response bodies, latency histograms, CPU profiles, and network traffic by attaching to kernel-level hooks.
# Install Pixie on a Kubernetes cluster
curl -s https://work.withpixie.ai/install.sh | bash
px deploy
# Query HTTP latency for a service (using PxL — Pixie Query Language)
px run px/http_data -start_time '-5m' -- --svc=my-service
Pixie is particularly valuable for debugging production issues without needing pre-instrumented traces. You can see the actual SQL queries a service is executing, the gRPC payloads between services, and CPU flame graphs — all from eBPF probes, all without touching application code.
Parca and Pyroscope: Continuous Profiling
Continuous profiling is one of the best applications of eBPF for production systems. Parca Agent and Grafana’s Pyroscope both use eBPF-based profiling to sample stack traces across all processes at low overhead (~1–3% CPU).
The key advantage over traditional profiling: it is always on, requires zero application changes, and works across languages, including compiled binaries. When a production incident causes a CPU spike at 3 a.m., the profiling data is already there.
# Run Parca Agent (deploys as a DaemonSet on Kubernetes)
kubectl apply -f https://raw.githubusercontent.com/parca-dev/parca-agent/main/deploy/manifests.yaml
# Or run standalone for a single host
sudo parca-agent --http-address=":7071" \
    --node=my-node \
    --store-address=parca.example.com:7070
Tetragon: Runtime Security Enforcement
Cilium’s Tetragon goes beyond Falco’s detection-only model. Using kprobes and eBPF LSM hooks, Tetragon can not only detect suspicious behavior but enforce policy, killing a process or blocking a network connection in kernel space before the operation completes.
# Example Tetragon TracingPolicy: kill any process exec'ing a shell binary
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: block-shell-exec
spec:
  kprobes:
    - call: "security_bprm_check"
      syscall: false
      args:
        - index: 0
          type: "linux_binprm"
      selectors:
        - matchArgs:
            - index: 0
              operator: "Equal"
              values:
                - "/bin/bash"
                - "/bin/sh"
          matchActions:
            - action: Sigkill
This policy kills any process attempting to exec /bin/bash or /bin/sh. No kernel patch, no reboot, and the SIGKILL fires before the shell ever runs. This is synchronous, in-kernel enforcement of a kind detection-only tooling cannot provide: the operation is stopped rather than merely reported after the fact.
eBPF for Networking: Cilium and XDP
Cilium: Replacing kube-proxy Entirely
Cilium replaces kube-proxy with an eBPF-based dataplane that handles Kubernetes Service routing, NetworkPolicy enforcement, and load balancing entirely in the kernel — no iptables rules, no conntrack tables for east-west traffic.
The performance difference is not marginal. iptables has O(n) rule processing where n is the number of rules. eBPF maps have O(1) lookup. At scale (thousands of services, hundreds of nodes), this is the difference between the network stack being a bottleneck and not.
# Install Cilium with kube-proxy replacement
cilium install \
    --set kubeProxyReplacement=strict \
    --set k8sServiceHost=<API_SERVER_IP> \
    --set k8sServicePort=6443
# Verify kube-proxy is fully replaced
cilium status --verbose | grep "KubeProxyReplacement"
# KubeProxyReplacement: Strict
XDP: Kernel Bypass for Packet Processing
eXpress Data Path (XDP) is an eBPF hook that runs before the kernel network stack allocates a socket buffer. At this point packets can be dropped, modified, or redirected with minimal overhead — typically 1–2 microseconds per packet versus 30–50 microseconds through the full stack.
This makes XDP practical for DDoS mitigation (dropping attack traffic at line rate without saturating the kernel), load balancing (used by Facebook’s Katran), and custom protocol implementations that don’t need kernel TCP/IP.
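A minimal XDP program shows how little code sits in this hot path. This is a sketch of kernel-side eBPF C: it must be compiled with clang and attached with a loader, and it unconditionally drops every packet on the attached interface, so treat it purely as illustration:

```c
// xdp_drop.bpf.c: drop every packet before the kernel allocates an skb.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_drop_all(struct xdp_md *ctx)
{
    return XDP_DROP;  /* other verdicts: XDP_PASS, XDP_TX, XDP_REDIRECT */
}

char LICENSE[] SEC("license") = "GPL";
```

A real DDoS filter would parse headers from ctx->data and return XDP_DROP only for matching traffic; the point is that the verdict is rendered before any socket buffer exists.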
eBPF Limitations to Know Before Going All-In
Kernel Version Requirements
Modern eBPF features require modern kernels. BPF CO-RE requires a kernel built with BTF (BPF Type Format, CONFIG_DEBUG_INFO_BTF), generally kernel 5.2 or later. Many eBPF security features, such as the LSM hooks Tetragon uses for enforcement, require 5.7+. In practice:
- Ubuntu 22.04 LTS ships kernel 5.15 — full support
- RHEL 8 ships kernel 4.18 — limited support, backported features
- Amazon Linux 2 ships kernel 4.14 — significant limitations
- Amazon Linux 2023 ships kernel 6.1 — full support
Before adopting any eBPF-based tool, audit your kernel versions across your fleet. Heterogeneous environments make this painful.
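A quick per-host audit can be scripted. The sketch below checks the two basics, kernel release and kernel BTF at its standard /sys/kernel/btf/vmlinux location:

```shell
#!/bin/sh
# Sketch: report kernel version and whether kernel BTF (needed for CO-RE) is present.
echo "Kernel: $(uname -r)"
if [ -r /sys/kernel/btf/vmlinux ]; then
    echo "BTF: present (CO-RE-capable)"
else
    echo "BTF: missing (CO-RE-based tools will not work here)"
fi
```

For a fuller report of which program types, map types, and helpers the running kernel supports, bpftool feature probe (from the kernel's bpftool utility) enumerates them directly.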
The Verifier Is Strict and Error Messages Are Cryptic
Writing eBPF programs directly means dealing with verifier rejections. Error messages like "invalid indirect read from stack off -16+0 size 4" require understanding the verifier's memory model. Frameworks like libbpf, ebpf-go, and BCC abstract this, but when something goes wrong in a production tool, debugging requires understanding what the verifier is enforcing.
Privileged Access Still Required
Loading eBPF programs generally requires CAP_BPF (kernel 5.8+), often combined with CAP_PERFMON for tracing or CAP_NET_ADMIN for networking programs; on older kernels it requires CAP_SYS_ADMIN wholesale. Unprivileged eBPF exists for limited use cases but cannot access most of the useful hook points. eBPF-based tools running in containers therefore need elevated privileges, which is an important security consideration when evaluating these tools.
Practical Adoption Path
For teams that want to start getting value from eBPF without diving into writing programs directly:
- Step 1: Deploy Cilium as your Kubernetes CNI if you’re not already locked into another. The kube-proxy replacement improves network performance and gives you identity-based NetworkPolicy enforcement.
- Step 2: Add Tetragon for runtime security visibility. Even in audit mode (no enforcement), the process and network event stream is invaluable for understanding what your workloads actually do.
- Step 3: Evaluate Parca or Pyroscope for continuous profiling. Deploy the agent as a DaemonSet and integrate with your existing Grafana stack via the Pyroscope data source.
- Step 4: Once you’ve operated these tools for a few months and developed intuition for eBPF’s model, consider building custom programs with ebpf-go for workload-specific instrumentation.
Conclusion
eBPF has fundamentally changed what’s possible in Linux observability, networking, and security. The tools built on it — Cilium, Tetragon, Pixie, Parca — are production-ready and deployed at significant scale. The performance and visibility gains are real and documented.
The entry barrier is the kernel version requirement and the mental model shift to thinking about kernel hook points and maps. But you don’t need to write eBPF programs to benefit from the ecosystem. Start with the tools, build intuition, and layer in custom programs as your needs outgrow what’s available off the shelf.
The trajectory is clear: eBPF is becoming as fundamental to Linux infrastructure as cgroups and namespaces. Teams that understand it deeply will have a durable advantage in both performance optimization and security posture.
