yorhel | 9 years ago
It wasn't a "we do this at scale" talk, but I'd love to see more experiments like it.
For the impatient: Skip to 17 minutes into the video, where he describes the previous architecture and what parts are replaced with Postgres.
1. https://fosdem.org/2017/schedule/event/postgresql_infrastruc...
dozzie | 9 years ago
Well, I will be conducting such an experiment in the near future, moving away from the ELK stack. I never used Logstash in the first place and used Fluentd instead (and now I'm using a mixture of my own data forwarder and Fluentd as a hub). I'm planning mainly to replace Elasticsearch, and will probably settle on a command-line client for reading, searching, and analyzing (I dislike writing web UIs).
All this because I'm tired of elastic.co. I can't upgrade my Elasticsearch 1.7.5 to the newest version, because then I would need to upgrade this small(ish) 4MB Kibana 3.x to a monstrosity that weighs more than the whole Elasticsearch engine itself, for no good reason at all. And now that I'm stuck with ES 1.x, it's only somewhat stable; it can hang for no apparent reason at unpredictable intervals, sometimes three times per week, and sometimes working with no problem for two months. And to add insult to injury, processing logs with grep and awk (because I store the logs in flat files as well as in ES) is often faster than letting ES do the job. I only keep ES around because Kibana gives a nice search interface and ES provides a declarative query language, which is easier to use than building an awk program.
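To illustrate the grep/awk approach I mean, here's a minimal sketch (the access-log lines and field positions are made up for the example; a real pipeline would read from the flat files instead of printf):

```shell
# Count HTTP 500 responses per hour from flat-file logs.
# The sample lines stand in for a real common-log-format file;
# grep's " 500 " match on the status field is deliberately crude.
printf '%s\n' \
  '10.0.0.1 - - [12/Feb/2017:10:01:02 +0000] "GET / HTTP/1.1" 500 123' \
  '10.0.0.2 - - [12/Feb/2017:10:15:44 +0000] "GET /x HTTP/1.1" 200 456' \
  '10.0.0.3 - - [12/Feb/2017:11:03:05 +0000] "GET /y HTTP/1.1" 500 789' \
  | grep ' 500 ' \
  | awk '{ split($4, t, ":"); count[t[2]]++ }   # t[2] is the hour
         END { for (h in count) print h, count[h] }' \
  | sort -n
# prints:
# 10 1
# 11 1
```

No cluster, no JVM, and on a box with fast disks this kind of thing finishes before an ES aggregation query would.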
> He even goes as far as implementing a minimal logstash equivalent (i.e. log parsing) into the database itself.
As for parsing logs, I would stay away from the database. Logs should be parsed earlier and made available for machine processing as a stream of structured messages. I have implemented such a thing using Rainer Gerhards' liblognorm and I'm very happy with the results, to the point that I derive some monitoring metrics, and was collecting inventory, from the logs.
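The "parse early, ship structured messages" idea can be sketched in one awk invocation standing in for liblognorm (the sshd log line, the field layout, and the key names are assumptions for illustration; liblognorm does this with declarative rule files rather than positional fields):

```shell
# Turn a raw syslog line into a structured key=value record before it
# ever reaches storage -- a toy stand-in for a liblognorm normalizer.
echo 'Feb 12 10:01:02 web1 sshd[123]: Failed password for root from 10.0.0.5' \
  | awk '{
      split($5, p, "[");   # "sshd[123]:" -> program name "sshd"
      printf "host=%s program=%s user=%s src=%s event=auth_failure\n",
             $4, p[1], $9, $11
    }'
# prints: host=web1 program=sshd user=root src=10.0.0.5 event=auth_failure
```

Once messages look like this, the downstream store (ES, Postgres, flat files) only has to index fields, never parse, and the same stream can feed monitoring metrics.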
awj | 9 years ago
...is that really a good reason to reinvent this whole solution, though? You're basically saying you're going to spend the time to replace your entire log storage/analysis system because you object to Kibana's size on disk. (Which, without knowing your platform specifically, looks like it safely sits under 100 megs.)
The rest of your complaints seem to stem from not having upgraded Elasticsearch, aside from possibly hitting query scenarios that remain slower than grep even after an upgrade.
Maybe I'm misunderstanding your explanation, but if I'm not, this sounds like a lot of effort to save yourself tens of megs of disk space.
einhverfr | 9 years ago
This is one reason I always store the full source data in the db.