top | item 29301154


curryst | 4 years ago

I've worked at some places that used Kafka (including LinkedIn), although I have never been responsible for running the platform itself. I'll chip in with what I see as the negatives.

Kafka sits at roughly the same tier as HTTP, but lacks a lot of the convention we have around HTTP. Those conventions allow people to build generic tooling for any app that uses HTTP: visibility, metrics, logging, etc, etc. Those are all things you effectively get for free with HTTP in most languages. Afaict, most of that doesn't exist for Kafka in a terribly helpful form. You can absolutely build something that will do distributed tracing for Kafka messages, but I'm not aware of a plug-and-play version like the ones that exist for HTTP in most languages.

The fact that Kafka messages are effectively stateless (in the UDP sense, not the application sense) also trips up a lot of people. If you want to publish a message, and you care what happens to that message downstream, things get complicated. I've seen people do RPC over event buses where they actually want a response back, and it became this complicated system of creating new topics so the host that sent the request would get the response back. Again, in HTTP land, you'd just slap a loadbalancer in front of the app and be done. HTTP is stateful, and lends itself to stateful connections.
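The reply-topic contortion described above can be sketched without any broker at all. This is a toy, in-process simulation (plain `queue.Queue` objects standing in for topics, a `correlation_id` field standing in for a message header); the function and field names are illustrative, not any real Kafka API:

```python
import queue
import uuid

# In-process stand-ins for a request topic and a per-caller reply topic.
# With a real event bus these would be two topics plus a correlation-id
# header, which is exactly the extra machinery being complained about.
request_topic = queue.Queue()
reply_topic = queue.Queue()

def rpc_call(payload):
    """Publish a request, then block until the matching reply arrives."""
    correlation_id = str(uuid.uuid4())
    request_topic.put({"correlation_id": correlation_id,
                       "reply_to": reply_topic,
                       "payload": payload})
    # Wait for a reply carrying our correlation id.
    while True:
        reply = reply_topic.get(timeout=5)
        if reply["correlation_id"] == correlation_id:
            return reply["payload"]

def serve_one():
    """Consumer side: process one request and publish the reply."""
    msg = request_topic.get()
    msg["reply_to"].put({"correlation_id": msg["correlation_id"],
                         "payload": msg["payload"].upper()})
```

Even in this trivial form you need correlation ids, a dedicated reply channel, and a timeout; with HTTP the response simply comes back on the same connection.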

Another issue is that when you tell people they can adjust their schema more often, they tend to go nuts. Schemas start changing left and right, and suddenly you need a product to orchestrate these schema changes and ensure you're using the right parser for the right message. Schema validation starts to become a significant hurdle.

It's also architecturally complicated to replace HTTP. An HTTP app can be just a single daemon, or a few daemons with a load balancer or two in front. Kafka is, at minimum, your app, a Kafka daemon, and a Zookeeper daemon (nb I'm not entirely sure Zookeeper is still required). You also have to deal with eventual consistency, which can make coding and reasoning about bugs dramatically harder than it needs to be. What happens when Kafka double-delivers a message?

My pitch is always that you shouldn't use Kafka unless it becomes architecturally simpler than the alternatives. There are problems to which Kafka is a better solution than HTTP, but they don't start with unstable schemas or databases being difficult. Huge volumes of data is a good reason to me; not being sure what your downstreams might be is another. There are probably more, I'm not an expert.

> our customers don't understand the data they're shoving at us. But Kafka will take care of all of that for us

Kafka isn't going to help with this at all. If your HTTP app can't parse it, neither will your Kafka app. Kafka does have the ability to do replays, but so does shoving the requests in S3 or a database for processing later. I promise you that "SELECT * FROM requests WHERE status='failed'" is drastically simpler than any Kafka alternative. It is neat that Kafka lets you "roll back time" like that, but you have to very carefully consider the prospect of re-processing the messages that already succeeded. It's very easy to get a bug where you have double entries in databases or other APIs because you're reprocessing a request.
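The double-entry bug described above is usually avoided by making the handler idempotent: give every request a key and refuse to apply the same key twice. A minimal sketch (the `processed` set and `ledger` list are illustrative; in production the dedup would be a unique index or keyed upsert in the database itself):

```python
# Replay-safe processing: track an idempotency key per request so that
# re-running old messages doesn't create duplicate rows downstream.
processed = set()   # in production: a unique constraint in the DB
ledger = []         # stand-in for rows written to a downstream system

def handle(request):
    """Apply a request exactly once; replays become no-ops."""
    key = request["id"]
    if key in processed:
        return False            # already applied, skip on replay
    ledger.append(request["amount"])
    processed.add(key)
    return True
```

With this in place, "rolling back time" and re-consuming a topic only re-applies the messages that actually failed the first time.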


EamonnMR|4 years ago

All very good points. What I like about Kafka is that you can queue up a bunch of messages without needing to be able to handle that load immediately. It lets you build very resistant patterns: if your message-senders overwhelm your message receivers in HTTP you can end up with connection failures, get stuck waiting, etc. In Kafka what happens is you now have a large backlog to work through, but at least your messages are somewhere accessible to you and not dropped on the floor.
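That buffering behavior is easy to picture with a toy model: a fast producer and a slow consumer, where the broker-style buffer turns overload into a backlog instead of dropped connections. This is a sketch with a plain `deque` standing in for the topic, not real Kafka:

```python
from collections import deque

# The "topic": accepts writes immediately, regardless of consumer speed.
backlog = deque()

def produce(messages):
    """Producer side: every message is accepted, nothing is dropped."""
    backlog.extend(messages)

def consume(batch_size):
    """Slow consumer: drains only what it can handle per poll."""
    done = []
    for _ in range(min(batch_size, len(backlog))):
        done.append(backlog.popleft())
    return done
```

In the HTTP version of this scenario, the equivalent of `produce` would be returning 503s or hanging connections once the receiver falls behind.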

HTTP definitely has the edge when it comes to library support. In fact, Confluent et al offer HTTP endpoints for Kafka so that you don't have to deal with the vagaries of actually connecting to a broker yourself (the default timeout in python for an unresponsive broker is _criminal_ for consumers. You will spend several minutes wondering when the message will arrive.) We use an in-house one. But that introduces HTTP's problems back into the process; you need to worry about overwhelming your endpoint again...

Regarding application patterns, ideally you're writing applications that read data from one topic (or receive messages, parse a file, etc) and write to another topic. Treating it as a request that will somehow be responded to later in time scares me and I wouldn't do it. What if your application needs to be restarted while some things are in-flight?

curryst|4 years ago

> It lets you build very resistant patterns: if your message-senders overwhelm your message receivers in HTTP you can end up with connection failures, get stuck waiting, etc.

I think the biggest drawback to HTTP in this space is that there's typically no coordination between clients and the server. Clients send requests when they want and the server has to respond immediately.

That becomes a big issue when you have an outage and all your clients are in retry loops, spiking your requests per second to 3x what they would normally be, on top of whatever the actual issue is.

Most of the retry stuff seems largely shared; i.e. your code should still have handlers for when Kafka isn't responding right. Kafka will only preserve messages on the queue; it won't help if you lose network connectivity, or your ACLs get messed up, etc.
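The retry-storm problem mentioned above (all clients hammering a recovering service in lockstep) is the same whether the transport is HTTP or Kafka, and the usual mitigation is exponential backoff with jitter. A minimal sketch of the "full jitter" variant (parameter names are illustrative):

```python
import random

def backoff_delays(retries, base=0.5, cap=30.0):
    """Full-jitter exponential backoff: each retry sleeps a random time
    in [0, min(cap, base * 2**attempt)], so clients don't all retry at
    the same instant and re-spike the recovering server."""
    return [random.uniform(0, min(cap, base * 2 ** attempt))
            for attempt in range(retries)]
```

Without the jitter, every client that failed at the same moment retries at the same moment, which is how you get the 3x request spike on top of the original outage.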

> Regarding application patterns, ideally you're writing applications that read data from one topic (or receive messages, parse a file, etc) and write to another topic. Treating it as a request that will somehow be responded to later in time scares me and I wouldn't do it. What if your application needs to be restarted while some things are in-flight?

The pattern I've seen is to make the processing itself idempotent, and only ack messages once they've been successfully processed. So if you restart the app while it's processing, the message will sit there in Kafka as claimed until it hits the ack timeout, and then Kafka will give it to a new node.
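That commit-after-processing pattern can be sketched in a few lines. This is a toy model of at-least-once consumption, not a real client: `committed` stands in for the consumer group offset, and the `seen` set is the idempotency guard (in practice it would live in the downstream datastore):

```python
class AtLeastOnceConsumer:
    """Commit the offset only after a message is processed; a crash
    before commit means the message is re-delivered, and the idempotency
    guard makes that redelivery harmless."""

    def __init__(self, log):
        self.log = log          # the partition's messages, in order
        self.committed = 0      # last committed offset
        self.applied = []       # effects visible downstream
        self.seen = set()       # idempotency guard

    def run(self, crash_before_commit=False):
        while self.committed < len(self.log):
            msg = self.log[self.committed]
            if msg["id"] not in self.seen:   # idempotent handler
                self.seen.add(msg["id"])
                self.applied.append(msg["value"])
            if crash_before_commit:
                return                       # die after work, before ack
            self.committed += 1              # "ack" only after success
```

Restarting mid-flight just means the uncommitted message is handed out again, and the idempotency check turns the duplicate delivery into a no-op.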

As far as RPC, I'm not advocating that it's a good idea, but you could implement timeouts and retries on top of an event bus. Edge cases will abound, and I wouldn't want to be in charge of it, but you could shove that square block into the round hole if you push hard enough.

krinchan|4 years ago

How does that still make Kafka a better choice than any of the other queueing systems out there? SQS, Redis, ActiveMQ, RabbitMQ, there are tons more queues out there that are far easier to use than Kafka.

EdwardDiego|4 years ago

> If you want to publish a message, and you care what happens to that message downstream, things get complicated.

Definitely agree. The basic concept of Kafka is that the publisher doesn't care, so long as data isn't lost. If you need the producer to redo stuff if the consumer failed, then Kafka is the square peg in your round hole.

And yeah, the best use case for Kafka is, IMO, "I have to shift terabytes or more of data daily without risking data loss, and I want to decouple consumers from producers".

Gigachad|4 years ago

Our company is currently looking into kafka and microservices. The problem we have is that the volume of actions going on has gone past what a single rails app with sql server can handle. When I look into it, it seems like it would mostly be used as some kind of job queue where worker microservices churn through the entries in kafka to do some kind of data processing without needing sql.

But then there are blog posts saying kafka is a terrible job queue because you can only have one worker per partition and it's hard to get more partitions dynamically.

EdwardDiego|4 years ago

Sure, you can only have one consumer in a consumer group per partition. But partitions are cheap. And it's reasonably trivial to add more partitions should you find you need more concurrent consumers.

A very basic rule of thumb is, on an X broker cluster, have N partitions, where N mod X = 0, so partitions divide evenly across brokers.

There's no harm in choosing something like 20 - 30 partitions for a topic, and increasing that when you need to scale consumers horizontally.

Dropping partitions is harder, but again, they're cheap, you won't need to for most use cases.

Only caveat to increasing partition count is when you're relying on absolute ordering per partition - key hashing can point to different partitions when you have 10 vs 50. It can still be done, but it requires a careful approach.
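The ordering caveat is easy to demonstrate: keyed partitioning is hash-then-mod, so changing the partition count remaps keys. This toy partitioner uses md5 rather than Kafka's actual murmur2 hash, but the effect is the same:

```python
import hashlib

def partition_for(key, num_partitions):
    """Toy keyed partitioner (not Kafka's murmur2, but the same idea):
    hash the key, then mod by the partition count."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Count how many keys land on a different partition at 10 vs 50 partitions.
keys = [f"user-{i}" for i in range(1000)]
moved = sum(1 for k in keys
            if partition_for(k, 10) != partition_for(k, 50))
```

Most keys move, which is why bumping the partition count silently breaks any consumer that assumed all messages for a given key stay on one partition in order.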

EamonnMR|4 years ago

What you can do is have a really large number of partitions and scale consumers up only when needed (workers need not be 1:1 with partitions.)

abledon|4 years ago

why kafka over the other messaging options? e.g. rabbitmq/amazon sqs/azure queue storage

almeria|4 years ago

Very helpful, thanks