Vector, the open-source observability pipeline maintained by Datadog, has matured through 2023 and 2024 into a serious option against Fluent Bit, Fluentd, and Logstash. Written in Rust, with its own transformation language called VRL, and with support for dozens of sources and destinations, it occupies a specific niche: complex log and metric transformations at the node, before the data is shipped to its destination.
What Distinguishes Vector
Vector’s proposition is threefold. First, performance: written in Rust, it consumes relatively little memory — typically between 30 and 100 megabytes in operation — though more than Fluent Bit, which is even lighter. Second, transformations: VRL (Vector Remap Language) allows rewriting, enriching, filtering, and pivoting events with a powerful, declarative syntax. Third, multi-source support: the same agent handles logs, metrics, and traces, from dozens of origins to dozens of destinations.
For teams that need complex observability pipelines — not just collect-and-send — Vector is the reference tool.
Vector Remap Language
VRL is the main differentiator against Fluent Bit. While Fluent Bit uses relatively limited chained filters, Vector allows writing expressive transformations that look like code but remain declarative. A typical example would normalise an IP field, extract Kubernetes pod metadata, calculate a derived severity, and enrich with geolocation — all in a config file.
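As an illustration, a minimal remap transform of this kind might look like the following sketch. The component names, the upstream source, and the field paths are assumptions, not taken from a real deployment:

```toml
[transforms.normalize_logs]
type = "remap"
inputs = ["kubernetes_logs"]   # hypothetical upstream source
source = '''
# Parse the raw message as JSON; the `!` variants abort the event on failure
parsed = object!(parse_json!(string!(.message)))
. = merge(., parsed)

# Derive a normalised severity, defaulting to "info" when the field is missing
.severity = downcase(string(.level) ?? "info")
'''
```

The `string(.level) ?? "info"` pattern is the idiomatic VRL way to fall back to a default when a field is absent or of the wrong type.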
The language has explicit typing, explicit error handling, built-in functions for parsing common formats, and the ability to unit-test transformations. For teams that previously wrote Lua scripts in Fluent Bit for the same purpose, VRL is more maintainable.
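Vector’s unit-testing support lets you assert on a transform’s output directly from the config file. A sketch, assuming a hypothetical remap transform named `normalize_logs` that lowercases a `level` field into `severity`:

```toml
[[tests]]
name = "severity is lowercased"

  [[tests.inputs]]
  insert_at = "normalize_logs"   # hypothetical transform under test
  type = "log"
    [tests.inputs.log_fields]
    message = '{"level": "ERROR"}'

  [[tests.outputs]]
  extract_from = "normalize_logs"
    [[tests.outputs.conditions]]
    type = "vrl"
    source = '.severity == "error"'
```

Tests run with `vector test <config file>`, which makes transformation logic part of the review and CI workflow rather than something verified only in production.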
Against Fluent Bit
A fair comparison acknowledges that each excels on different terrain. Fluent Bit is lighter, has been production-tested at massive scale for more years, and has a solid CNCF ecosystem. It is the default choice for simple log collection in Kubernetes clusters with high pod density.
Vector wins when transformations are non-trivial. If the pipeline needs several enrichment steps, parsing of varied formats, complex filtering, and multiple simultaneous destinations, VRL simplifies things significantly compared with chains of Fluent Bit filters and Lua scripting. It also wins when observability mixes logs, metrics, and traces in the same agent.
Against Logstash
Logstash is the traditional Elastic stack agent. It works, but it has a reputation for consuming quite a lot of memory — typically a gigabyte or more — and it doesn’t scale well at density. For modern Kubernetes environments, Vector is a natural replacement: Rust versus the JVM, modern transformations versus Ruby plugins.
Migrating from Logstash to Vector is a real project, but a viable one. Logstash’s Grok transformations have direct or easily adaptable VRL equivalents.
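For instance, a Logstash `grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }` filter maps to VRL’s `parse_grok` function, using the same pattern syntax. A sketch with assumed component names:

```toml
[transforms.parse_apache]
type = "remap"
inputs = ["file_logs"]   # hypothetical upstream source
source = '''
# Same Grok pattern library as Logstash; merge the parsed fields into the event
. = merge(., parse_grok!(string!(.message), "%{COMBINEDAPACHELOG}"))
'''
```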
Typical Use Cases
Vector shines in situations with heterogeneous observability. Consider a company with applications in Kubernetes, databases on virtual machines, serverless services, and legacy servers emitting syslog. Each source produces different formats. A centralised Vector deployment can consume them all, normalise them to a common schema, enrich them with metadata, and distribute them to several destinations — Loki for hot logs, S3 for archival, Datadog for executive dashboards, Elasticsearch for legal audit.
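A skeleton of such a fan-in/fan-out topology might look like this. The endpoints, bucket name, and labels are illustrative assumptions, and the Datadog and Elasticsearch sinks would follow the same shape:

```toml
[sources.k8s]
type = "kubernetes_logs"

[sources.legacy_syslog]
type = "syslog"
address = "0.0.0.0:514"
mode = "udp"

# Normalise everything to a common schema
[transforms.common_schema]
type = "remap"
inputs = ["k8s", "legacy_syslog"]
source = '''
if !exists(.service) { .service = "unknown" }
'''

# Fan out: hot logs to Loki...
[sinks.loki]
type = "loki"
inputs = ["common_schema"]
endpoint = "http://loki:3100"
encoding.codec = "json"
  [sinks.loki.labels]
  service = "{{ service }}"

# ...and the same stream to S3 for archival
[sinks.archive]
type = "aws_s3"
inputs = ["common_schema"]
bucket = "logs-archive"
region = "eu-west-1"
encoding.codec = "json"
```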
Without Vector, each pipeline has its own agent, with scattered configurations and fragmented maintenance. With Vector unifying them, visibility is consistent and maintenance is centralised.
Honest Limitations
Vector isn’t a universal replacement. For very simple pipelines — collect container logs and send them to Loki — Fluent Bit is lighter and requires less configuration. For users already deep in the Elastic ecosystem, Logstash may be the more natural fit thanks to its integrations. And Vector’s support for certain sources is less mature than that of specialised alternatives.
Additionally, the VRL learning curve has a cost. A team used to Fluent Bit needs several weeks to become fluent in VRL. The investment pays off for complex pipelines, but not for trivial cases.
Datadog Integration
Vector is an open-source project, but it is maintained by Datadog, which explains its natural integration with Datadog products. Teams that are already Datadog customers get commercial support and synergy between tools.
However, the project is genuinely open source. It works equally well sending data to Loki, Elasticsearch, Splunk, Kafka, or any other destination. It doesn’t require a Datadog account to be useful, and the MPL 2.0 licence gives real guarantees.
Kubernetes Deployment
Vector typically deploys as a DaemonSet in Kubernetes, similar to Fluent Bit. An official Helm chart is available and covers the usual patterns. For mixed infrastructure, Vector can also run as a systemd service on traditional virtual machines, with equivalent configuration.
The recommended pattern is a per-node agent plus a centralised aggregator. Agents do light collection and send to the aggregators, which perform the heavy transformations. This reduces the load on production nodes and centralises the transformation logic.
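The two tiers talk over Vector’s native protocol, via the `vector` source and sink. A minimal sketch with assumed addresses (the aggregator side is shown as comments, since it lives in a separate config file):

```toml
# --- agent config (runs as a DaemonSet on every node) ---
[sources.k8s]
type = "kubernetes_logs"

[sinks.to_aggregator]
type = "vector"
inputs = ["k8s"]
address = "vector-aggregator:6000"   # hypothetical aggregator service name

# --- aggregator config (centralised deployment) would mirror this: ---
# [sources.from_agents]
# type = "vector"
# address = "0.0.0.0:6000"
# ...heavy remap transforms and the final sinks go here
```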
Vector Observability
The agent itself exposes Prometheus metrics on events processed, transformations applied, errors, and latency. A Grafana dashboard dedicated to Vector is sensible — the agent that collects your logs also deserves monitoring.
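Exposing that telemetry takes two components: the `internal_metrics` source feeding a `prometheus_exporter` sink. A sketch — the port shown is Vector’s conventional default, but treat it as an assumption for your version:

```toml
[sources.vector_metrics]
type = "internal_metrics"

[sinks.prometheus]
type = "prometheus_exporter"
inputs = ["vector_metrics"]
address = "0.0.0.0:9598"
```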
Roadmap and Future
The project has frequent releases, an active community, and a public roadmap. Datadog invests significant resources in it. OpenTelemetry integration continues to advance, allowing Vector to receive OTel data and redirect it after transformation. For 2025 we expect even broader source and destination support.
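Receiving OTel data is a matter of enabling the `opentelemetry` source, which listens on the standard OTLP gRPC and HTTP ports. A sketch — the exact option names are worth verifying against your Vector version:

```toml
[sources.otel]
type = "opentelemetry"
  [sources.otel.grpc]
  address = "0.0.0.0:4317"
  [sources.otel.http]
  address = "0.0.0.0:4318"

# Downstream components consume the source's named outputs, e.g. "otel.logs"
```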
Conclusion
Vector is the appropriate choice when observability requires non-trivial transformations and multi-source consolidation. For simple pipelines, Fluent Bit remains lighter and more pragmatic. For legacy Elastic ecosystems, Logstash may remain the more natural option. The pragmatic decision depends on real pipeline complexity and the appetite for learning VRL. For teams with mature observability, already managing multiple sources and destinations, investing in Vector reduces fragmentation and improves maintainability. For teams just starting out, Fluent Bit remains a reasonable entry point — Vector arrives when the complexity justifies it.
Follow us on jacar.es for more on observability, log agents, and modern architectures.