Show HN: Kubetail – Web-based real-time log viewer for Kubernetes
70 points | andres | 2 years ago | github.com
Kubetail is a new project I've been working on. It's a private, real-time log viewer for Kubernetes clusters. You deploy it inside your cluster and access it via a web browser, like the Kubernetes Dashboard.
Using Kubetail, you can view logs in real time from multiple workload containers simultaneously. For example, you can view all the logs from the pod containers running in a Deployment, and the UI will update automatically as pods come into and out of existence. Kubetail uses your cluster's own Kubernetes API, so your logs always stay in your possession and it's private by default.
Currently you can filter logs by node properties such as availability zone, CPU architecture, or node ID, and we have many more features planned.
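For anyone curious how a multi-pod view hangs together conceptually: the core trick is interleaving several per-pod follow streams by timestamp. Here's a toy, stdlib-only sketch of that merge (not Kubetail's actual code; the pod names and timestamps are made up):

```python
import heapq

def tag(pod, lines):
    # Attach the pod name to each (timestamp, line) pair so the merge
    # can report where each line came from.
    for ts, line in lines:
        yield (ts, pod, line)

def merge_streams(streams):
    # streams: {pod_name: iterable of (iso_timestamp, line)}, each already
    # in timestamp order, like the output of a per-pod `kubectl logs -f`.
    iters = [tag(pod, lines) for pod, lines in streams.items()]
    for ts, pod, line in heapq.merge(*iters):
        yield pod, ts, line

# Made-up pods and timestamps, purely for illustration.
streams = {
    "web-7d9f-abc": [("2024-01-01T00:00:01Z", "GET /"),
                     ("2024-01-01T00:00:03Z", "GET /healthz")],
    "web-7d9f-def": [("2024-01-01T00:00:02Z", "GET /login")],
}
merged = list(merge_streams(streams))
for pod, ts, line in merged:
    print(f"{ts} [{pod}] {line}")
# 2024-01-01T00:00:01Z [web-7d9f-abc] GET /
# 2024-01-01T00:00:02Z [web-7d9f-def] GET /login
# 2024-01-01T00:00:03Z [web-7d9f-abc] GET /healthz
```

Because ISO-8601 timestamps sort lexicographically, `heapq.merge` on plain tuples is enough here; a real viewer also has to handle pods appearing and disappearing mid-stream.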
Here's a live demo: https://www.kubetail.com/demo
Check it out and let me know what you think!
Andres
remram | 2 years ago | reply
The gap between "easy to set up but single-container with no search" on one side and "install with Helm but search all containers, including historical logs, with full text and metrics" on the other seems like a tiny niche to me.
edit: oh you need to install Kubetail cluster-wide too. At least no DaemonSet I guess.
MuffinFlavored | 2 years ago | reply
Kibana needs Elasticsearch
I'm not sure if this has good enough log viewing: https://github.com/kubernetes/dashboard
smock | 2 years ago | reply
https://github.com/kubetail-org/kubetail
as an open source repo?
flashgordon | 2 years ago | reply
I suppose it is never too late :)
nodesocket | 2 years ago | reply
I believe there is no persistence, or does it cache anything in local storage on the client? It would be awesome to have an option for client-side storage for, say, 24 hours.
andres | 2 years ago | reply
Currently, there's no persistence. I'll think about how we could enable client-side storage.
cryptonector | 2 years ago | reply
I use it for tailing logs remotely, naturally, and as a poor man's Kafka. Between regular file byte offsets, ETags, and conditional requests, one can build a reliable event publication system with `tailfhttpd`. For example, an event stream can name its next instance ({local-part, ETag}), then be renamed out of the way to end in-progress `GET`s, and clients can resume from the new file.
With a few changes it could "tail" (watch) directories, and even allow `POST`ing events (which could be done by writing to a pipe the reader of which routes events to files that get served by `tailfhttpd`).
Because `tailfhttpd` just serves files, and because of the ETag thing, conditional requests, and xattrs, it's very easy to build more complex systems on top of it -- even shell scripts will suffice.
This chunked-encoding, "hanging-GET" thing is so unreasonably effective and cheap that I'm surprised how few systems support it.
I have visions of rewriting it in Rust and supporting H2 and especially H3/QUIC to reduce the per-client load even further (think of TCP TCBs and buffers), and of using io_uring instead of epoll for even better performance.
Oh, and this approach is fully standards-compliant. It's just a chunked-encoding, indefinite-end ("hanging") GET with all the relevant (but optional) behaviors (ETags, conditional requests, range requests, even the right end of the byte-range being left unspecified is within spec!).
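Since the pattern is all standard HTTP, the resume half of it can be shown with nothing but the Python stdlib. This is a toy sketch of the client-side resume described above; `tailfhttpd` itself isn't used here, and the tiny range-aware handler stands in for it:

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# Write a small "log" file to serve.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "events.log")
with open(path, "w") as f:
    f.write("event-1\nevent-2\nevent-3\n")

class TailHandler(http.server.BaseHTTPRequestHandler):
    # Minimal GET handler that honors an open-ended byte range
    # ("bytes=N-"), which is all a resuming tail client needs.
    def do_GET(self):
        with open(path, "rb") as f:
            data = f.read()
        rng = self.headers.get("Range")
        if rng and rng.startswith("bytes="):
            start = int(rng[len("bytes="):].split("-")[0])
            body = data[start:]
            self.send_response(206)  # Partial Content
            self.send_header("Content-Range",
                             f"bytes {start}-{len(data) - 1}/{len(data)}")
        else:
            body = data
            self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

srv = http.server.ThreadingHTTPServer(("127.0.0.1", 0), TailHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# Pretend a previous GET already delivered the first 8 bytes ("event-1\n")
# before the connection ended; resume from that offset, leaving the right
# end of the range unspecified.
req = urllib.request.Request(f"http://127.0.0.1:{port}/events.log",
                             headers={"Range": "bytes=8-"})
with urllib.request.urlopen(req) as resp:
    assert resp.status == 206
    rest = resp.read().decode()
print(rest, end="")
# event-2
# event-3
srv.shutdown()
```

The same byte-offset bookkeeping works against any range-capable server; adding an `If-None-Match` check against the stored ETag is what makes the resume safe when the file may have been replaced.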