top | item 9859825

culo | 10 years ago

apiaxle and apiumbrella actually have few features. vulcan is not for api management, and besides that, quality > quantity.

ersoft|10 years ago

  vulcan is not for api management and beside that quality > quantity
What do you mean? With vulcand you can implement your own middleware, similar to Kong. Also, it is written in Go, so you can use all the existing Go libraries.

Also, how can you gracefully reload Kong if you need to add/remove/change a plugin? With vulcand you just replace the binary and send a USR2 signal to the running process. It will fork, wait for all connections to drain, and then remove the old process.

For deployment, again, you need to sync all the Lua files; with vulcand you just ship a compiled binary.

Kong doesn't have a notion of servers for each API; you need to forward requests to HAProxy or another load balancer for this. Also, I can see that backends are added by their DNS hostname. Is there any way to achieve HA (backend redundancy), given that nginx caches the upstream DNS values?

About performance, I can see you are advertising about 1,000 r/s using Kong, and you need 3 machines for this (Kong, Cassandra, HAProxy). I benchmarked vulcand and got about 12,000 r/s on more modest hardware.

sciurus|10 years ago

"What do you mean? With vulcand you can implement your own middleware similar to kong."

Nginx is not an API management layer, right? It doesn't have the necessary features, but it's extensible so someone could build them. Mashape did, and they released the result as Kong.

Vulcand is in the same position as nginx. It could be a good choice for building an API management layer on, but it doesn't provide one out of the box.

fosk|10 years ago

Just answering some of those questions below:

> How can you gracefully reload kong if you need to add/remove/change a plugin?

You can gracefully reload Kong by executing "kong reload" (http://getkong.org/docs/0.3.2/cli/#reload). It pretty much works the same way: fork, wait for existing connections to drain, and remove the old master process.

> For deployment, again, you need to sync all lua files, with vulcand you just ship a compiled binary.

Kong has a few distribution options (rpm, deb, Docker, etc) that simplify the process (http://getkong.org/download/), so unless it's being built from source, the deployment is straightforward.

> Kong doesn't have a notion of servers for each API, you need to forward requests to a haproxy or another load balancer for this. Also, I can see that backends are added by their DNS hostname. In order to achieve HA (backend redundancy) is there any way you can do it, assuming that nginx is caching the upstream dns values ?

You are correct. Kong supports either a DNS name or an IP address for a backend service, and starting from 0.5.0 it will be possible to add multiple DNS names or IPs per API, which will let Kong work in load-balancing mode (https://github.com/Mashape/kong/issues/157).

> About performance, I can see you are advertising about 1000 r/s using kong

Kong is basically built on top of nginx (a very solid core), and it can achieve performance similar to nginx's because all the data requested from the external datastore is cached in memory. So the latency is pretty much going to be nginx + the Lua execution overhead (which is minimal, thanks to LuaJIT). I will run a better benchmark and write a blog post soon, trying to cover both single-node and multi-datacenter setups.

> and you need 3 machines for this (kong, cassandra, haproxy)

An external datastore (Cassandra, with plans to support PostgreSQL) is required because Kong has been built to scale multi-node and multi-datacenter in order to handle pretty much every use case and plugin/middleware. In very simple single-node use cases, Cassandra can live on the same instance as Kong. I would say that in this regard Kong is inspired by a different philosophy (start small, grow big, even supporting hybrid cloud/bare-metal setups).

Having an external datastore also means that Kong can support some pretty cool plugins, like multi-datacenter service discovery, API billing coordination, multi-region health checks, etc. For example, today it is possible to rate-limit requests in an eventually consistent fashion, with replication across multiple datacenters.