8000 messages a day, tops? That’s 5 a minute. Does that warrant “infrastructure”? I think a gameboy’s Z80 could handle that load.
I don’t want to be dismissive, but I often see these big numbers being posted, like “14M messages” or “thousands of messages” and then adding “per year” or something, which brings it down to toy level load.
Even the first “serious” example is about “thousands of messages” per minute. Say 5K a minute. That’s 83 per second, call it 100. That seems... not that interesting?
Am I being too dismissive? I think I am. I am not seeing something right. Can anybody say something to widen my perspective?
FTA's conclusion: "If you are handling thousands of messages a day, a simple database-driven queue might be better than Kafka."
They're not trying to say "thousands of messages a day" is a lot; rather the opposite. Or at the very least, they're saying that at that scale, it is not significant enough to merit the complexity they were dealing with.
I think you're probably being slightly dismissive. It's not necessarily about load but various other concerns like durability, delivery latency and how failures are handled. There's a big difference between messaging and reliable messaging. I have messaging systems that take 10-20 messages a day but must deliver those messages and do it on a deadline. For that you do need infrastructure (and no that isn't a queue inside a SQL database).
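The difference between messaging and reliable messaging can be as small as retry-until-deadline; a minimal sketch (all names hypothetical, and a real system would also persist the message before attempting delivery):

```python
import time

def deliver_with_deadline(send, message, deadline_s, base_backoff=0.01):
    # Retry until the send is acknowledged or the deadline passes;
    # this is the "reliable" part that a fire-and-forget call lacks.
    deadline = time.monotonic() + deadline_s
    attempt = 0
    while time.monotonic() < deadline:
        try:
            send(message)
            return True  # acknowledged
        except Exception:
            attempt += 1
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            time.sleep(min(base_backoff * 2 ** attempt, remaining))
    return False  # missed the deadline: time to page a human

# A sender that fails twice before the far end accepts the message.
attempts = []
def flaky_send(msg):
    attempts.append(msg)
    if len(attempts) < 3:
        raise ConnectionError("far end down")

print(deliver_with_deadline(flaky_send, "invoice-123", deadline_s=2))  # True
```

At 10-20 messages a day, the hard part is everything around this loop (persistence, alerting, dedup on the receiving side), which is why it still counts as infrastructure.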
> 8000 messages a day, tops? That’s 5 a minute. Does that warrant “infrastructure”? I think a gameboy’s Z80 could handle that load.
The blog post is quite clear in stating that their pain points had nothing to do with scaling or throughput. The author explicitly mentions idempotency, custom headers, and authentication.
I think you're ranting about a strawman you put up.
My busy discussion forum built in PHP running on a toaster of a server was handling way more load than that 15 years ago.
It seems a lot of the complaints weren't about Kafka itself, but rather seemed to stem from internal communication problems. Custom Kafka message headers could very well be custom HTTP headers, and the problem would be the same. Kafka is just incidental.
Looking at the volume though, Kafka is overkill. They most likely could have just used the database and reaped the benefits of doing everything in a single transaction, with easier row-level locking. The post acknowledges this.
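For concreteness, the database-as-queue idea the post points at can be one table and two transactions; a minimal sketch in SQLite (hypothetical schema; in Postgres you would use SELECT ... FOR UPDATE SKIP LOCKED for concurrent consumers):

```python
import sqlite3

# One table is the whole queue. Enqueue and dequeue are ordinary
# transactions, so a consumer can update business tables and mark
# the message done atomically.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE queue (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        payload TEXT NOT NULL,
        done INTEGER NOT NULL DEFAULT 0
    )
""")

def enqueue(payload):
    with conn:  # one transaction
        conn.execute("INSERT INTO queue (payload) VALUES (?)", (payload,))

def dequeue():
    # Claim the oldest unprocessed row and mark it done in one transaction.
    with conn:
        row = conn.execute(
            "SELECT id, payload FROM queue WHERE done = 0 ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE queue SET done = 1 WHERE id = ?", (row[0],))
        return row[1]

enqueue("order-created")
enqueue("order-paid")
print(dequeue())  # order-created
print(dequeue())  # order-paid
```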
I do think it highlights the need for a small-scale Kafka, though. It's conceptually great to have everything work off of logs, but Kafka does add a non-trivial operational burden.
Does something like that exist?
We've had "small-scale" Kafka for a long time. It's an append-only log, and there are a number of ways to implement it, but it's essentially that.
The thing that makes Kafka interesting is the technique of operating from the Linux disk write buffer. That's the trick that makes it fast and scale to huge volumes. But if you don't have the scale, you can stand up a table, or RabbitMQ, or anything that manages append-only ordered log entries. There doesn't need to be a new thing... Kafka was the new thing.
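The "append-only ordered log" core really is this small; a toy in-memory sketch (no durability, purely illustrative):

```python
class TinyLog:
    """An append-only ordered log: the 'small-scale Kafka' core."""
    def __init__(self):
        self._entries = []

    def append(self, record):
        self._entries.append(record)
        return len(self._entries) - 1  # offset of the new record

    def read_from(self, offset):
        # Consumers track their own offset, exactly as with Kafka.
        return self._entries[offset:]

log = TinyLog()
log.append("created")
log.append("paid")
print(log.read_from(1))  # ['paid']
```

Everything Kafka adds on top of this (partitions, replication, the page-cache trick) only pays off once the volume demands it.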
> Looking at the volume though, Kafka is overkill.
Overkill in what sense? The blog post seems to suggest Kafka was already pervasive in their organization, and that they leveraged the existing infrastructure and simply added a couple of topics. How is this overkill?
I used bash and netcat for a queue like this once. I stashed the messages on disk if the database was down and read them back when the far end came back up.
The thing Kafka really brings to the table is trusting the pipe -- in a case where writing to disk queues would occur often enough, I would run into reliability issues building my own system; Kafka handles that type of indexing for you.
The main problem I have with Kafka is that their sales team is too good: at my previous employer the CIO was convinced we needed Kafka and bought a contract for several 100k. But we already had all our events in a Postgres database.
Admittedly, that database had some complicated queries with lots of business logic to get a useful view on the data, and at first I hoped Kafka would somehow make this easier. But of course our particular use case, with a low event volume (hundreds per day), high latency tolerance (next-day reporting was considered good enough), and highly complex business logic (various computations that required knowledge of what was done previously), made Kafka just about the least suitable tool for the job.
Of course the contract was already signed (I was naturally never consulted up front), so this resulted in a lot of solution looking for a problem. No suitable problem was found, so I ended up leaving the enterprise world for a scale-up, and the CIO is still doing whatever he wants for god knows why.
I've worked at two large companies now with mature managed Kafka offerings. The 'platform' engineering team handles all of the engineering, implementation, security and compliance, upgrades, observability etc. and has self-service onboarding with lots of recipes and sample integrations. My team moves about 5B messages a day through two topics and we're not putting a dent in the overall volume. It just enables us to move so much more quickly than we would if we had to deal with all of that ourselves.
So in our case it's clearly not an anti-pattern, but the right tool for the job.
The fact that you had an entire team operating it is the key, I think. I've seen various big buzzwordy techs used at various shops I've worked at and whether it was nice to work with totally came down to whether there was a team behind it operating it for us.
K8s with a k8s team running it - fabulous. Without a team and everyone kind of just needs to know enough to get by, except for the one guy who set it up? Dreadful.
Airflow when there's a team running it? Great. Luigi when it's just you, another dude, and the one guy who set it up? Not great.
Even RDS is like, eh. We still had RDS tip over and had to do manual vacuuming to compact tables with dead tuples. Annoying when we were paying for RDS ostensibly to not have to do this.
The way I think about this kind of problem is to remember that tools built to deal with huge scaling problems are generally dealing with a very complex set of variables. The tool is going to be designed to let you choose between all of those variables. There's no magic - just configuration whose complexity better matches that of your problem.
That being said, if you are not yet in a situation as complex as the one your tool is designed to deal with, there is a very good chance you will waste some time starting to use such a tool "early." You might get that time back later when you scale, you might have the right people to set up the complex tool the right way for your simple situation, but you are taking a bit of a risk. As long as you go into the situation with your eyes open I think most people end up ok. The horror stories almost always come from people who are working to fulfill needs they do not have and don't understand why their work isn't giving good ROI.
Immediately starts to doubt OP's assumption/implementation of the "where it works great"
Jokes aside, agree with others. For the 7500/day, I would just push these into an S3/minio folder, then dequeue N at a time once every T seconds (10/30/60, whatever). Play around for the right N and T.
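The dequeue-N-every-T-seconds idea above could look roughly like this (a local directory standing in for the bucket; layout and names hypothetical, one message per file with filename order as arrival order):

```python
import os
import tempfile

def drain(folder, n, handle):
    # Process up to n queued files, oldest name first.
    names = sorted(os.listdir(folder))[:n]
    for name in names:
        path = os.path.join(folder, name)
        with open(path) as f:
            handle(f.read())
        os.remove(path)  # the "ack" is just deleting the file
    return len(names)

# Simulate a producer dropping three messages into the folder.
folder = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(folder, f"{i:06d}.msg"), "w") as f:
        f.write(f"event-{i}")

seen = []
print(drain(folder, n=2, handle=seen.append))  # 2
print(seen)  # ['event-0', 'event-1']
```

In production you would run drain on a timer (cron or a loop) and tune n and the interval against your burst sizes.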
Then again, I'm sure there may be reasons/context/constraints we're unaware of.
E.g.:
- the ingestion is spiky, with the possibility of all 7500 arriving in a few seconds or minutes; you would want to first make sure the HTTP traffic can scale before getting to the point where it can actually connect and push to the queue
- an intern who just finished up their first Kafka task just got freed up
- this was the only infra available, and a choice had to be made with the time available at hand
- this was an experiment to see for yourself
A majority may not agree with your view, but none of us really are in your shoes. So I applaud you for sharing your thoughts anyway.
Seems like a lot of what I read about Kafka really makes it sound like using it is quite, well, Kafkaesque
Why do so many engineers end up struggling with an event sourcing system while the system itself remains highly popular? I don't know, but I theorize the following:
- It's flexible enough to do things like receive events (messages) and send downstream events derived from those received
- It can ingest events fast; a well-tuned instance is very fast and can handle a lot of volume
- It often provides a middleware logging point (or other hooks) for things happening throughout your whole system
Perhaps all of these things (and more) are hard to attain using a different technology
The simple approach mentioned in the article gets annoying if you have microservices that don't share a DB. You could add a shared DB or a NoSQL DB, but then you may as well just add Kafka. Of course, the key question then shouldn't be Kafka or not-Kafka, but whether you over-engineered on microservices.
> Why do so many engineers end up struggling with an event sourcing system while the system itself remains highly popular? I don't know.
It's the mental model that's simple: some services write down what happened, other services 'do their own thing' with that information.
I can write the core of the system in 2022, with events like 'Joe wants to buy a bike', 'Joe owes us $200', 'Joe paid us $200', 'Send Joe the bike', etc.
In 2023 I want to build the book-keeping service, in 2024 I want to build the inventory management system, and in 2025 I want to hook it up to a CRM and see if we can try to sell some bike parts to Joe.
Why couldn't I just use REST for that? Because the recipients of the REST calls didn't exist yet.
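That "consumers that don't exist yet" point is the whole trick; a toy sketch using the events above (an in-memory list standing in for the log):

```python
# Events written "in 2022" stay in the log, so a service built years
# later can replay from offset 0 and build its own view of history.
events = []

def publish(event):
    events.append(event)

publish("Joe wants to buy a bike")
publish("Joe owes us $200")
publish("Joe paid us $200")

# The book-keeping service, built "in 2023", replays everything it missed.
balance = 0
for event in events:
    if event == "Joe owes us $200":
        balance += 200
    elif event == "Joe paid us $200":
        balance -= 200
print(balance)  # 0
```

A REST endpoint that didn't exist in 2022 can never receive the 2022 calls; a log that retains events lets each new consumer catch up on its own schedule.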
The bad part of Kafka (in my opinion) is how opinionated the consumer logic is (oftentimes by necessity, because of the whole distributed system thing). Sometimes I just wanna ask "what offset are you up to?", but end up in API hell, and am unable to do it.
This is what I've been starting to think about on a more abstract level: introducing a new technology, a new system, into a design isn't like putting a piece into a jigsaw puzzle, or, even worse, trying to mold and force the system to fit whatever hole your design has. The more specialized systems - and Kafka is one of them - should solve some problem, but they should also change your mental model of the system, and you should look for the easiest way to introduce these heavy hitters.
For example, if you use Kafka or streaming solutions like Flink or Spark, you should change your mental model to (possibly large), (possibly replayable) streams of events and look for simple ways to get these event streams going and good ways to consume them. And then you need to let the design push you where it wants you to go.
Like, at work, we recently had a discussion about how storage-expensive it was for a project to store all events of a day, and how the query to count all of these events per tenant was taking so long. While they are using a streaming event processor in front of it. Like, what the hell - think in streams, tally up these events on the fly, and persist that every hour?
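The "tally on the fly" suggestion is a few lines; a sketch (names hypothetical; a real version would persist the snapshot somewhere instead of returning it):

```python
from collections import Counter

# Keep a running count per tenant and persist only the hourly
# snapshot, not every raw event.
tallies = Counter()

def on_event(tenant_id):
    tallies[tenant_id] += 1

def flush_hourly():
    # In a real pipeline this would write the snapshot to storage.
    snapshot = dict(tallies)
    tallies.clear()
    return snapshot

for tenant in ["acme", "acme", "globex"]:
    on_event(tenant)
print(flush_hourly())  # {'acme': 2, 'globex': 1}
```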
Anything is deceptively deep if your understanding never goes beyond skin deep.
Also Avro is great but like Kafka you were probably holding it wrong.
I do prefer Protobuf in these particular scenarios, as Protobuf's features more closely align with svc <-> svc RPC-style communication patterns, while Avro shines in longer-lived scenarios where messages need to be archived and you don't want to come up with your own framing for your Protobufs.
This is because Avro has the Avro Object Container Format, a simple block-based file format that allows for relatively efficient seeking, block-based compression, etc. Protobuf unfortunately doesn't define any standard file format or even wire-protocol framing. If you need to do more than simply store and scan/read in bulk, you might want to use Parquet instead, though.
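Since Protobuf leaves framing to you, the minimum viable version is a length prefix per record; a sketch (4-byte big-endian length here is an arbitrary choice, not any standard):

```python
import io
import struct

def write_frames(stream, payloads):
    # Each record: a 4-byte big-endian length, then the payload bytes.
    for p in payloads:
        stream.write(struct.pack(">I", len(p)))
        stream.write(p)

def read_frames(stream):
    # Yield payloads until the stream runs out of complete headers.
    while True:
        header = stream.read(4)
        if len(header) < 4:
            return
        (n,) = struct.unpack(">I", header)
        yield stream.read(n)

buf = io.BytesIO()
write_frames(buf, [b"msg-1", b"msg-2"])
buf.seek(0)
print(list(read_frames(buf)))  # [b'msg-1', b'msg-2']
```

Avro's container format gives you this plus sync markers and per-block compression for free, which is exactly the gap being described.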
Reading this blog post was probably a waste of my time, hopefully this comment actually helps someone though.
The anti-pattern here isn't Kafka, it's using Kafka for 7,500 messages/day. Making your whole system asynchronous for that level of load is the textbook definition of over-engineering.
It's not necessarily over-engineering, because scaling/performance isn't the only reason to do things asynchronously. In the example given, they may want to decouple consumers and producers from each other.
For example, they could have one service using CDC (the author's DB source connector) to propagate state out to a bunch of other systems without knowing which systems are subscribing for changes.
An organization could have many such systems propagating state out to many other systems, using a single distributed log system.
Sometimes you do want it to be asynchronous anyway; it just doesn't make sense to make it asynchronous with a horizontally scalable distributed streaming platform...
I agree that Kafka is a lot of machinery for a fairly limited gain, but I really dislike this whole notion of 'antipattern', as if we can look at a thing and assign a decontextualized thumbs up or down; as if building systems were just a matter of assembling the right patterns and avoiding the antipatterns.
Sometimes a technology or pattern can be poisonous in a very specific way that warrants a label: when they're most alluring to those least equipped to leverage them.
Just like microservices, you cross the activation energy to want Kafka very easily, because it's appealing on résumés, sounds like a hedge against scale, etc.
But there's a huge asymmetry in understanding the drawbacks to them. When you spin up these systems, the drawbacks don't hit you immediately, it feels like they're solving the problem you had, and it's not until you've invested immense amounts of sweat capital (and literal capital) that you discover how badly you screwed up.
You need some way to match that low-effort value prop with a low-friction warning: this is not a panacea for your problems. It only seems simple, it's not simple, and it will hurt you unless you know it will hurt you and simply have the resources and scale to play through that hurt.
—
To me that warning is what's implied by "antipattern": it's not "never use this", it's "never use this unless you know why you should never use this".
I agree completely. Kafka, and event-based architectures, are highly overused, but they're also the right thing sometimes. It's FAR easier to manage an architecture where systems call into the source of truth for a given piece of data via APIs, and you should only switch to an event model if you truly need to.
This was not explicitly addressed in the post, but the big "Kafka antipattern" out there is building "microservice infrastructure" and using a stateful message broker between services where you should be using RPC/look-aside load balancing with deadlines and retries.
Some morons even write books and blog posts about this. The funny thing is this sort of shit is done in the name of scale, but the big folks never operate this way. Large scale infrastructures actively disdain keeping buffers and state in the middle of the request flow. They cannot afford the cost and latency of such systems. They do it the sane way[1].
[1] https://www.usenix.org/conference/osdi23/presentation/saokar
Whilst Kafka isn’t a good fit for everything, it genuinely sounds like these issues were organisational, not tech, and any stack would have run into the same frustrations.
The main struggle I have with Kafka is managing partitioning to avoid hotspots.
I have a scenario where we have hundreds of installs of an old and shitty RDBMS on customer sites, and we need to replicate changes to data to a central store. We had to come up with a bespoke event system that would capture insert, update, and delete events, ship them to a REST endpoint, which would then throw them into Kafka to be processed into the central store. Kafka’s ordered message log made it ideal for this scenario, as we can’t play events out of order (although because of poor design in the old databases, it sometimes happens, and we built a retry system using additional Kafka topics; nonetheless, avoiding out-of-order messages is critical to keep consumer lag under control).
This works mostly ok, but we have a problem when individual customers have big bursts of traffic. Ultimately, we need records to be processed in the order they happen, per customer. Naively, we could partition by customer ID, but arbitrarily adding new partitions as we add customers is not practical over time, and regardless, bulk inserts, updates, etc. could cause large amounts of latency for a customer. So, we’re doing a balancing act of trying to partition using customer ID + a “bundle name” of related tables (the net effect being activity to dependent tables for the same customer always go to the same partition and thus process in order). We’re also looking at using additional topics to create high, medium, and low priority queues, but while that may smooth out some of the problems, it really only breaks the original problem into three smaller versions of the same problem, effectively kicking the can down the road.
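The customer-ID-plus-bundle partitioning described above boils down to stable hashing against a fixed partition count; a sketch (partition count and names hypothetical):

```python
import hashlib

NUM_PARTITIONS = 12  # fixed up front; growing this later reshuffles keys

def partition_for(customer_id, bundle):
    # Hash customer ID + bundle name so all events for related tables of
    # one customer land on the same partition and thus process in order.
    key = f"{customer_id}:{bundle}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Same key, same partition, every time:
assert partition_for(42, "orders") == partition_for(42, "orders")
print(partition_for(42, "orders") in range(NUM_PARTITIONS))  # True
```

The trade-off in the comment follows directly: a hot customer saturates one partition, and any scheme that spreads their traffic wider gives up per-customer ordering.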
Ultimately, the best solution would be to get rid of the crappy RDBMS and replace with something that we can binlog or otherwise sync transactionally rather than record by record. We are working on this, but it’s slow going. In the meantime, we continue to wrestle with Kafka partitioning woes.
As an aside, we also got rid of Avro. It just didn’t have any benefits that outweighed the challenges of getting it and keeping it working over time. Much easier to just use plain JSON, a common message class library between consumers and producers, and a fast, traditional JSON library. I’ll fully admit that perhaps the Avro woes are more an issue of inexperience, but I seem to find more people who have the same experience as me than not. Either way, plain JSON has not caused us any problems.
Kafka will be trash when used as a message bus, which is exactly what the author experienced. It can work, but it’s designed for stream processing, not messaging, so it will always be inferior when used this way.
Curious what language the OP and their team were using to integrate with Avro. Binary serialization can be a bit awkward, but Avro is a very stable API, and it isn't difficult to find and/or build abstractions to work with it. Perhaps I am spoiled coming from a Clojure perspective?