Can it rebalance partitions? And delete crap from topics?
Every time, we have to call up our ops guys for this. It's like: deep breath, first utter some indescribable magic AWS IAM nonsense, somehow get an ephemeral shell on some random Bitnami Kafka image, coax its ./kafka-topics.sh scripts into working over the next few hours, and ultimately succeed, but with deep regrets.
If you have to manually delete data from topics, you are doing Kafka wrong. The whole point of it is high-speed data throughput, so something automated downstream does the deleting for you.
Hi! kaskade is more like AKHQ than Cruise control.
We use Strimzi by default, so we deploy Cruise Control with Kafka, and it takes care of rebalancing the data across the nodes. You can also deploy it without Strimzi.
Deleting crap is more complicated, usually done with kafka-delete-records (this is kind of new, I think). The problem is the offsets. As a general rule, you should not delete data from topics.
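For context, the kafka-delete-records tool mentioned above takes a JSON file describing which offsets to truncate before. Below is a minimal sketch that builds such a file; the topic name, partition, and offset are hypothetical, and the trailing shell command is only a comment showing how the file would typically be used.

```python
import json

def delete_records_payload(topic, partition_offsets):
    """Build the --offset-json-file payload for kafka-delete-records.

    Records with offsets *below* the given offset are deleted;
    an offset of -1 means "truncate up to the current end of the log".
    """
    return {
        "version": 1,
        "partitions": [
            {"topic": topic, "partition": p, "offset": o}
            for p, o in partition_offsets.items()
        ],
    }

# Hypothetical topic: drop everything before offset 1000 on partition 0.
payload = delete_records_payload("my-topic", {0: 1000})
with open("delete-records.json", "w") as f:
    json.dump(payload, f)
# Then, roughly:
#   kafka-delete-records.sh --bootstrap-server my-kafka:9092 \
#       --offset-json-file delete-records.json
```

Note that this only advances the log start offset; consumers positioned before it will see their offsets become invalid, which is the "problem with the offsets" mentioned above.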
In Textual, I wanted to write a logging app with a big data table and filtering; I'd hoped it would be a bit more straightforward to write an optimized one.
I guess it would be a nice addition to have some kind of FilterableDataTable with history, filtering, caching, and fast rendering
I guess you probably developed something like that for this tool, perhaps you could share it in Textual, or some kind of "textual widgets extension lib"?
Hi! I did not implement something like that, but I can say that this year Textual has a great set of features and a nice community willing to help and share. I totally recommend Textual.
Does it support Protobuf deserialization via Schema Registry? This is basically where every other tool falls apart. Kafka UI just added support very recently but kcat falls apart.
Hi! It validates whether the message was (or was not) generated with Schema Registry: it checks if the message has the Schema Registry magic bytes (these bytes contain the schema ID). So it deserializes messages with or without Schema Registry.
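For the curious, the "magic bytes" check described above refers to Confluent's wire format: a single 0x00 magic byte followed by a 4-byte big-endian schema ID, then the encoded payload. Here is a minimal sketch of that check (not kaskade's actual code); note it is a heuristic, since a raw message could also happen to start with 0x00.

```python
import struct

MAGIC_BYTE = 0

def parse_confluent_header(message: bytes):
    """Split a Confluent wire-format message into (schema_id, payload).

    Layout: 1 magic byte (0x00) + 4-byte big-endian schema ID + data.
    Returns (None, message) when the header is not present.
    """
    if len(message) > 4 and message[0] == MAGIC_BYTE:
        schema_id = struct.unpack(">I", message[1:5])[0]
        return schema_id, message[5:]
    return None, message

# A message registered under schema id 7, followed by its raw payload.
schema_id, payload = parse_confluent_header(b"\x00\x00\x00\x00\x07hello")
```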
This is the drawback: you have to generate the descriptor first.
1) download the schema from schema registry:
http :8081/schemas/ids/<my-id>/schema
2) generate the descriptor with the schema:
protoc --include_imports --descriptor_set_out=my-descriptor.desc --proto_path=. my-schema.proto
Genuinely surprised that Kafka is something unknown; I thought it had gained ubiquitous status similar to k8s, but maybe that's me walking into a Baader–Meinhof effect.
cyberpunk|1 year ago
kvakerok|1 year ago
sauljp|1 year ago
rockwotj|1 year ago
You have to rewrite the whole topic to do this right? (or do some hacks with compaction if you have unique keys)
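The "hacks with compaction" mentioned above rely on Kafka's log compaction semantics: a compacted topic keeps only the latest record per key, and producing a record with a null value (a "tombstone") eventually removes the key. A minimal in-memory sketch of those semantics (not Kafka code, and ignoring retention timing):

```python
def compact(log):
    """Minimal sketch of Kafka log-compaction semantics.

    `log` is a list of (key, value) records in offset order. Compaction
    keeps only the latest value per key; a None value (a "tombstone")
    marks the key for deletion once compaction has run.
    """
    latest = {}
    for key, value in log:
        if value is None:
            latest.pop(key, None)  # tombstone: the key is removed entirely
        else:
            latest[key] = value
    return latest

records = [("user-1", "v1"), ("user-2", "v1"),
           ("user-1", "v2"), ("user-2", None)]
compacted = compact(records)  # only user-1's latest value survives
```

This is why the hack needs unique keys: without a stable key per record, there is nothing for a tombstone to target.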
oulipo|1 year ago
sauljp|1 year ago
kvakerok|1 year ago
sauljp|1 year ago
ddoolin|1 year ago
sauljp|1 year ago
3) use kaskade: kaskade consumer -b my-kafka:9092 -x auto.offset.reset=earliest -k string -v protobuf -t my-protobuf-topic -p descriptor=my-descriptor.desc -p value=mypackage.MyMessage
lordgrenville|1 year ago
sauljp|1 year ago
1bent|1 year ago
https://en.wikipedia.org/wiki/Apache_Kafka
Huntsecker|1 year ago
westurner|1 year ago
Kaskade models.py, consumer.py https://github.com/sauljabin/kaskade/blob/main/kaskade/model...
kombu/transport/confluentkafka.py: https://github.com/celery/kombu/blob/main/kombu/transport/co...
confluent-kafka-python wraps librdkafka with binary wheels: https://github.com/confluentinc/confluent-kafka-python
librdkafka: https://github.com/edenhill/librdkafka
sauljp|1 year ago
brew install kaskade
pipx install kaskade
terminaltrove|1 year ago
https://terminaltrove.com/kaskade/
For those interested, kaskade is made with the Textual TUI framework.
https://www.textualize.io/
Thanks for making this sauljp.
potamic|1 year ago
sauljp|1 year ago