I was dragging my feet on building a log shipper solution; I was going to use Filebeat -> Elasticsearch -> Kibana.
This looks great. My primary attraction is the potentially lower memory footprint of this program compared to Filebeat. A secondary attraction is how easy it appears to be to enable transformations.
Now, if I may make a suggestion for your next/additional project: a neat system metric collector in Rust that exports to Prometheus, built on the same principles:
Low memory footprint,
Rust,
Single binary,
Customizable with a single config file without spending hours in manuals,
Stdin, Stderr -> transform -> Prometheus.
I'm learning Rust and eventually plan to build such a solution, but I think a lot of this project could be repurposed for what I described much faster than building a new one.
Cheers on this open source project. I will contribute whatever I can. Thanks!!
It's still slightly rough around the edges, but Vector can actually ingest metrics today in addition to deriving metrics from log events. We have a source component that speaks the statsd protocol which can then feed into our prometheus sink. We're planning to add more metrics-focused sources and sinks in the future (e.g. graphite, datadog, etc), so check back soon!
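For the curious, a minimal sketch of what that statsd-to-Prometheus pipeline might look like in Vector's TOML config. Component and option names here are illustrative and may differ across versions; the Vector docs are authoritative:

```toml
# Hypothetical sketch: ingest statsd metrics, expose them to Prometheus.
[sources.statsd_in]
  type = "statsd"
  address = "127.0.0.1:8125"   # address to listen on for statsd packets

[sinks.prom_out]
  type = "prometheus"
  inputs = ["statsd_in"]
  address = "0.0.0.0:9598"     # scrape endpoint for Prometheus
```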
Seems similar to Veneur (like many other projects mentioned in comments here; didn't realize this space was so crowded!) - down to the first two letters of the name: https://github.com/stripe/veneur
Veneur is more metrics-focused, but might offer inspiration as you work on metrics support in Vector - in particular the SSF source, internal aggregation, and Datadog and SignalFX sinks.
Absolutely, Veneur is something we looked at quite a bit when it popped up. It's clear Stripe was feeling a lot of the same pain points we were when we started building Vector and they've come up with something really impressive.
As you mentioned, it seems they've focused more on metrics out of the gate, while we've spent more of our time on the logging side of things (for now). We're working to catch up on metrics functionality, but interoperability via SSF is an interesting idea!
We use a rather bespoke syslog -> ClickHouse log sink (https://github.com/discordapp/punt/tree/clickhouse) that we wrote in house because Logstash (and subsequently Elasticsearch) was too slow. Would love to switch from it to this! Hopefully a ClickHouse sink comes soon! Maybe we'll contribute one upstream!
Out of curiosity, could you tell us a little more about your log analysis workflow? Once they are in Clickhouse, how do you visualise/search/analyse your logs? What is your equivalent of Kibana?
Absolutely, this is likely the next integration we'll be working on. There were a few schema-related features we needed supported before starting, but we're _very_ close. We'd love beta testers to help us build it out. Feel free to email us if you're interested: [email protected]
Just a heads up: There are several figures in your docs where the entirety of the useful information on the page is in the image and they don't have alt tags or any accessible way to get the information (that I can find anyway). e.g. https://docs.vector.dev/use-cases/security-and-compliance
Could this replace a simple fluentd setup right now or are there still major functionalities missing?
Specifically, I'm ingesting nginx logs in JSON format, cleaning up invalid UTF-8 bytes (usually sent in the forwarded-for header to exploit security vulnerabilities), and sending to Elasticsearch on an automated 90-day retention policy (daily indexes).
Seems like a fairly common use case for webservers.
It's very likely we're doing something wrong with this test, but after many hours of trying we couldn't get our simple test to pass for Logstash, even though it passed for others:
Telegraf is nicely done. We spent a lot of time testing solutions in our test harness (https://github.com/timberio/vector-test-harness) and Telegraf was the most impressive of the tools we tested, so kudos to the Influx team on that.
But to answer your question, telegraf is very heavily metrics focused, and their logging support appears to be limited (reducing logs to metrics only). Vector is _currently_ focused on logging with an eye towards metrics, but still has work to do on the metrics front.
For example, we opened the door with the `log_to_metric` transform (https://docs.vector.dev/usage/configuration/transforms/log_t...) to ensure our data model supports metrics, but we still have a lot of work to do when it comes to metrics as a whole. Our end goal is to eventually replace telegraf and be a single, open, vendor neutral solution for both logs and metrics.
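As a rough illustration of the `log_to_metric` idea, a config fragment along these lines derives a counter from parsed log fields. All names here (the upstream component, field, and metric name) are illustrative, not taken from the linked docs:

```toml
# Hypothetical: count log events by HTTP status code.
[transforms.status_counts]
  type = "log_to_metric"
  inputs = ["parsed_logs"]     # assumed upstream parsing transform

  [[transforms.status_counts.metrics]]
    type = "counter"
    field = "status"
    name = "http_responses_total"
```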
For clarification, does "mib/s" mean "Mbit/s" (since lowercase b usually stands for bits, and uppercase B usually for Bytes)?
If yes, how come log processing runs at such low throughput in general?
That is not to talk down your achievements (as per your benchmark page, you do better than similar projects in terms of throughput), but I'm genuinely curious why modern machines that have 40 Gbit/s memory bandwidth are capped at (in your case) 76.7Mbit/s. What's the bottleneck?
The capitalisation is confusing, but "Mi" means "mebi": Mib for mebibits, MiB for mebibytes. So 1024 × 1024 bits is a Mib, and 1024 × 1024 × 8 bits is a MiB.
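To make the units concrete, a quick back-of-the-envelope conversion, assuming the benchmark's 76.7 figure meant mebibytes per second:

```python
MIB = 1024 * 1024        # bytes per mebibyte
rate_mib_per_s = 76.7    # throughput figure from the benchmark table

bytes_per_s = rate_mib_per_s * MIB
bits_per_s = bytes_per_s * 8
print(f"{bits_per_s / 1e6:.1f} Mbit/s")  # ≈ 643.4 Mbit/s if bytes were meant
```

If the figure was mebibits instead, it works out to only about 80.4 Mbit/s. Either reading sits far below the 40 Gbit/s memory-bandwidth figure mentioned above, so the gap the parent comment asks about is real either way.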
Hi! I work on Vector. For a motivating example, let's say you have an application fronted by nginx. Using Vector would allow you to ingest your nginx logs off disk, parse them, expose status code and response time distributions to prometheus, and store the parsed logs as JSON on S3.
There are obviously plenty of ways to accomplish that same thing today, but we believe Vector is somewhat unique in allowing you to do it with one tool, without touching your application code or nginx config, and with enough performance to handle serious workloads. And Vector is far from done! There's a ton more we're working to add moving forward (thinking about observability data from an ETL and stream processing perspective should give you a rough idea).
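To make that nginx example concrete, here is a rough sketch of such a pipeline as a Vector TOML config. Everything here is illustrative: the regex is truncated to a single capture, the bucket name is made up, and option names may differ by version, so treat it as a shape, not a working config:

```toml
# Hypothetical end-to-end sketch: nginx access log -> parse -> Prometheus + S3.
[sources.nginx]
  type = "file"
  include = ["/var/log/nginx/access.log"]

[transforms.parsed]
  type = "regex_parser"
  inputs = ["nginx"]
  # Truncated pattern; a real one captures remote address, timing, etc.
  regex = '(?P<status>\d{3})'

[transforms.status_metrics]
  type = "log_to_metric"
  inputs = ["parsed"]

  [[transforms.status_metrics.metrics]]
    type = "counter"
    field = "status"
    name = "http_responses_total"

[sinks.prometheus]
  type = "prometheus"
  inputs = ["status_metrics"]
  address = "0.0.0.0:9598"

[sinks.s3_archive]
  type = "aws_s3"
  inputs = ["parsed"]
  bucket = "parsed-nginx-logs"   # assumed bucket name
  region = "us-east-1"           # assumed region
  encoding = "ndjson"
```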
That's a good way to think about it! Heka was a big inspiration. The design isn't exactly the same, but we're aiming to solve a lot of the same problems.
Unfortunately deployment of Hindsight isn't as nice as Heka since you need to compile it yourself with all the Lua extensions you need, and the documentation is very disorganized.
Vector looks great on those counts, will be excited to try it if they get features like reliable Vector-Vector transport and more flexible file delimiters.
This is so exciting! An enterprise-grade solution for log workflows. For those unfamiliar with the Rust ecosystem: this project (Vector) addresses the 'L' within the ELK stack, and probably more.
There already are a lot of projects in this space.
While better performance is always great, most are already plenty fast for the majority of use cases.
The main power comes from the multitude of inputs and outputs. Vector has a lot of catching up to do there. But if they manage to offer a noteworthy performance gain... one more is always a good thing.
PS: the Logstash numbers seem suspiciously low. I'd bet it's some JVM config issue. Logstash can slow to a crawl if it doesn't have enough memory.
It's also worth taking into account the size of the software and its relative CPU utilization. Log shippers do require CPU cycles and memory that would otherwise be available to run the other workloads on the host.
As for the multitudes of inputs/outputs, covering the 95% most-used sources and sinks is a great starting point. I think Vector got that list right in this case.
Yep, I push about 50-100MB/s through a single instance of Logstash (Redis (list) -> S3). That configuration is not in the benchmark table, but surely it's more demanding than TCP -> Blackhole, TCP -> TCP, etc.
Regardless, Vector looks very nice and I'll be testing it out :)
When people say "high performance" about these things, I wonder how they compare, for instance, with the Sandia tools.* One thing that matters there is avoiding system noise (jitter) on the monitored systems with transport over RDMA.
I don't know why the author didn't put a correctness tick mark on it.
https://github.com/timberio/vector-test-harness/tree/master/...
Definitely open to feedback on what we're doing wrong.
https://github.com/influxdata/telegraf
Biggest thing that pops out to me is the Lua engine (seems amazing :) )
Happy to clarify further :)
Do either Filebeat or Logstash support config hot reload, as mentioned in Vector's docs? https://docs.vector.dev/usage/administration/reloading
Edit - Found It - https://www.elastic.co/guide/en/logstash/current/reloading-c...
https://github.com/timberio/vector-test-harness/tree/master/...
Logstash is not graceful. Our testing shows that they basically shut it down and start it again.
https://github.com/mozilla-services/hindsight
* http://ovis.ca.sandia.gov/
https://flume.apache.org/