item 39980463


tkone | 1 year ago

If you’re debugging something simple or non-distributed, this product isn’t for you.

If you’re working on anything distributed, log aggregation becomes a must. But also: if you’re working on anything distributed and you’re still reading logs, you’re desperate. Distributed traces are far higher quality.
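To make the logs-vs-traces distinction concrete, here is a minimal, hand-rolled sketch (not any real tracing library; names like `Span` and `start_child` are illustrative) of what a trace adds over flat log lines: every span carries the same `trace_id` and a `parent_id`, so all the work done for one request links together causally instead of having to be joined by timestamp guessing.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    """One timed unit of work; spans sharing a trace_id belong to one request."""
    name: str
    trace_id: str
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    parent_id: Optional[str] = None
    start: float = field(default_factory=time.monotonic)
    end: Optional[float] = None

def start_trace(name: str) -> Span:
    # Root span: mints a fresh trace_id for the whole request.
    return Span(name=name, trace_id=uuid.uuid4().hex)

def start_child(parent: Span, name: str) -> Span:
    # Child span: inherits the trace_id and records who called it.
    return Span(name=name, trace_id=parent.trace_id, parent_id=parent.span_id)

# Simulate a request crossing two "services". In a real system the
# trace_id/parent_id pair would be propagated in request headers.
root = start_trace("GET /checkout")
db = start_child(root, "db.query")
db.end = time.monotonic()
root.end = time.monotonic()
```

With plain logs, the `db.query` line on one host and the `GET /checkout` line on another are unrelated strings; here they are explicitly linked by `trace_id`, which is the structural difference the parent comment is pointing at.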

umanwizard | 1 year ago

When I formed these opinions I was working on Materialize, which is basically the polar opposite of "simple and non-distributed". However it was still quite common that I knew exactly which process was doing something weird and unexpected.

jhrmnn | 1 year ago

Maybe it’s the difference between tracking a bug (abnormal operation) vs understanding behavior of a complex system (normal operation)?

mason55 | 1 year ago

Yup, and the reason no one markets something like "tail the logs for server X" is that, if you're operating at the scale of an individual server, you're too small for anyone to care about.

ta1243 | 1 year ago

I've got logs from hundreds of servers that I use standard tools to look at, and that's a small system. Centralising logs has been a thing for decades.
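As a sketch of what "standard tools on centralized logs" can look like: assuming a hypothetical setup where a shipper (e.g. rsyslog) writes each host's logs to `/var/log/remote/<hostname>/app.log` on one central box, ordinary grep and sort cover a lot of ground. The paths and log format here are invented for illustration.

```shell
# Hypothetical layout: /var/log/remote/<hostname>/app.log, one file per host.
# Find every 5xx response across all hosts; -H prefixes each match with its
# file path, so the hostname is visible, then group matches by host.
grep -H " 50[0-9] " /var/log/remote/*/app.log | sort -t: -k1,1
```

Nothing here needs a log-aggregation product; the only prerequisite is that the logs land in one place, which, as the comment notes, has been standard practice for decades.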

zo1 | 1 year ago

Sorry, I did plenty of "distributed" tracing back in the day and this is just not the case. I can't help but feel you're rationalizing after the fact, as if you need this to diagnose anything "distributed" or "complicated".

Distributed anything is actually easier to debug in most cases, because you always have an input and an output to inspect at each boundary. Sure, if you're debugging a complicated, coordinated "dance" between two concurrent threads/processes, then fully agreed, but at that point you're deep in uncharted territory and you need all the help you can get.