
Kong gateway reaches 1.0 GA, now supports service mesh

150 points | hisham_hm | 7 years ago | konghq.com

49 comments

[+] wavesquid|7 years ago|reply
Hi all,

I was the lead developer on the service mesh implementation. I've just made https://github.com/Kong/kubernetes-sidecar-injector public, which should make deploying Kong as a service mesh on Kubernetes simple.

Let me know if you have any questions; I'll try to check in over the next few days.

[+] vijaykodam|7 years ago|reply
I see that your Kong service mesh uses NGINX as the underlying proxy. Are there any plans to use Envoy in the future? Currently Istio and Linkerd2 are based on Envoy. Unlike NGINX, Envoy doesn't have an enterprise version where some enterprise features are held back. Envoy also has very good observability and traffic management features.
[+] jklepatch|7 years ago|reply
I used Kong in a project recently. Very bare-bones; you need to build your own client for it. Overall, it felt like a black box where things magically happened. Totally hated it.
[+] fosk|7 years ago|reply
Hello, CTO of Kong here. You mean a client to consume the Admin API in order to create entities on Kong? If that's the case, the community has built lots of open source clients in pretty much any language, which makes the job of integrating Kong within an existing system much easier. The community around Kong has also built declarative configuration support that you can use instead of the Admin API.

Our official declarative configuration will also be released very soon, and in some environments (like Kubernetes) we already support it.
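To make the idea concrete, a declarative config is a single file describing services and routes, instead of a sequence of Admin API calls. A minimal sketch (field names follow the DB-less/decK format Kong later shipped; the service name and URL are made up):

```yaml
# Illustrative only: hypothetical service and route in
# declarative form, applied as a whole instead of via the Admin API.
_format_version: "1.1"
services:
  - name: example-service            # hypothetical name
    url: http://example.internal:8080
    routes:
      - name: example-route
        paths:
          - /api
        methods:
          - GET
```

The whole file is the desired state, so re-applying it is idempotent, which is what makes it attractive in environments like Kubernetes.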

[+] bullen|7 years ago|reply
What about performance? Did you use it enough to get a sense of that? To me nothing beats HAProxy so far.
[+] aloisbarreras|7 years ago|reply
Great work guys! I see that "Kong’s core router can now route raw TCP traffic."

Can you elaborate? What kinds of routing parameters and rules can Kong use to route raw TCP traffic?

[+] hisham_hm|7 years ago|reply
You can define a route to be either L7 ("http", "https") or L4 ("tcp", "tls"). For L7 routes you can use the usual routing parameters that were already available in Kong, like routing by host, by paths (prefix or regex-based) and/or by method. For stream routing, you can route by source IP/port, destination IP/port, and/or by SNI in the case of TLS connections.
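As a sketch, the two route types described above might look like this in declarative form (service names, addresses, and CIDRs here are illustrative; the `protocols`, `sources`, `destinations`, and `snis` fields follow Kong's route entity docs):

```yaml
# Hypothetical side-by-side of an L7 route and an L4 (stream) route.
services:
  - name: web-api                      # made-up L7 upstream
    url: http://upstream-api:8080
    routes:
      - name: http-route
        protocols: ["http", "https"]
        hosts: ["api.example.com"]
        paths: ["/v1"]
        methods: ["GET", "POST"]
  - name: raw-tcp                      # made-up L4 upstream
    url: tcp://upstream-tcp:9000
    routes:
      - name: tls-route
        protocols: ["tcp", "tls"]
        sources:
          - ip: "10.0.0.0/8"           # match by source IP
        destinations:
          - port: 9443                 # match by destination port
        snis: ["db.example.com"]       # match by SNI on TLS connections
```

Note the L4 route has no hosts/paths/methods: at the stream layer only connection-level attributes are available to match on.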
[+] philip1209|7 years ago|reply
Congrats! I worked for Kong's VP of Engineering a couple of years ago at OpenDNS. He's one of the best managers I've seen.
[+] justicezyx|7 years ago|reply
What is the relevance to this news?

And could you name a few traits of this person that make him/her stand out?

[+] itmeyou|7 years ago|reply
So I admittedly don't know much about this stuff, but what would be the difference between using Kong and the Nginx ingress controller? What advantages/improvements would I see/be able to use?
[+] kureikain|7 years ago|reply
Kong is different from the Nginx ingress controller. Without a controller, you have to manually register each endpoint/service, and you would want to bypass the k8s Service and use pod IPs directly. That's why they built the Kong ingress controller.

The main difference is the plugin API: it's very easy to write plugins for Kong. The second is where data is persisted. Kong stores data in a db, so it can do things with it. The Nginx ingress controller in 0.21 has dynamic backends; it basically holds in-memory objects for the API routing rules.

Kong shines when you have complex routing logic or want to leverage its API key authentication. For example, you can easily expose a service with API keys stored in the db, whereas with the Nginx ingress you have to write a lot of `auth access` rules and store keys in a ConfigMap/env.

Nginx ingress config is all about watching ConfigMaps/annotations and regenerating config, e.g. when a new service is added the config is regenerated (when pods are added/removed it uses Lua for routing, so no reload there). In Kong these changes are seamless, with no reload; all data is stored in either Postgres or Cassandra.

That said, Kong is very nice, but it adds more overhead than a simple Nginx ingress.
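The "expose a service with API keys in the db" case above can be sketched in declarative form. The `key-auth` plugin and `keyauth_credentials` field are from Kong's docs; the service, consumer, and key are made up:

```yaml
# Sketch: protect one service with key-auth and provision
# one consumer credential, instead of hand-writing auth rules.
services:
  - name: orders                     # hypothetical service
    url: http://orders.internal:8080
    routes:
      - name: orders-route
        paths: ["/orders"]
    plugins:
      - name: key-auth               # requests now need a valid key
consumers:
  - username: partner-app            # hypothetical consumer
    keyauth_credentials:
      - key: "s3cr3t-api-key"        # sent as an `apikey` header/param
```

With the Nginx ingress the equivalent is custom auth snippets plus key storage you manage yourself, which is the trade-off being described.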

[+] mikejulietbravo|7 years ago|reply
Kong's K8s Ingress Controller lets you configure and run plugins (custom code) on your proxy traffic. This gives you a lot of power over how you'd like to route, authenticate, and shape your traffic.

Nginx gives you the ability to tweak functionality, but it's not as dynamic or as easy out of the box.

[+] nphase|7 years ago|reply
How does Kong compare to Tyk?
[+] jively|7 years ago|reply
(Caveat: I’m the CEO of Tyk)

Tyk offers a more “batteries included” approach than Kong, and so doesn't rely on external plugin authors to extend the ecosystem. 100% of our dev team are constantly working on our open source components and we like to keep it that way.

Because of that, Tyk isn't “open core” like Kong is: there's no lock-in or levers to get you to buy our value-adds like our Management Dashboard GUI or our Multi-Data-Center clustering add-on - you should be able to do all API Management without having to pay us a penny.

A simple example is OpenID Connect support: it's a Kong enterprise plugin, while with Tyk it comes as part of the normal gateway.

In terms of performance Tyk and Kong are pretty close now (Tyk pre 2.6 was slower) but we believe we now have parity, especially when switching on things like analytics, auth and rate limiting.

Tyk works very well in k8s though we don’t have a helm chart yet (coming soon).

You can also deploy Tyk as pure SaaS (fully managed), hybrid cloud (we handle back-end and control plane, you install gateways local to services) and full on-prem (install anywhere: K8s, AWS, GCP, Azure - even on Arm servers). We’re unique in that regard.

Tyk has always been separated into control-plane and operations-level components (our gateway is very small), so we don’t see that as something new to crow about. If you use our Dashboard, it moves the configuration and data layer out of the gateways and centralizes it. If you use our MDCB system (enterprise) you can extend that capability across clusters in different clouds to get really targeted, distributed API governance.

There’s a bunch of other things that are different too, but they are more functional.

[+] fosk|7 years ago|reply
Hello, Marco CTO of Kong here.

Kong is arguably more popular than Tyk (and other similar gateways) when it comes to adoption (55M+ downloads and more than 70,000 instances of Kong running per day across the world), and faster when it comes to performance. BBVA - a large banking group - wrote this technical blog post a while ago comparing Kong's and Tyk's performance: https://www.bbva.com/en/api-gateways-kong-vs-tyk/

Kong OSS is 100% open source, not limited to non-commercial use.

Kong is basically a programmable runtime that can be extended with Plugins [1]. There are more than 500 plugins available on GitHub that we are (slowly) adding to the official Hub, among over 5,000 contributions. You can talk to the community at https://discuss.konghq.com/

Kong is also lightweight with a lower footprint, which is required to support both traditional API gateway use cases and modern microservices environments (Kubernetes sidecar, for example). Because of that, our users are basically using one runtime for both N-S traffic (traditional API Gateway usage) and E-W traffic within a microservice oriented architecture. You can easily separate data and control planes to grow to thousands of Kong nodes running in a system.

There are users/customers running 1M+ TPS on top of distributed Kong clusters spanning different platforms (containers, multi-cloud, even bare metal) with less than 1 ms of processing latency per request. One of the reasons for this is that with Kong you can include/exclude plugins that you don't use instead of having a heavier all-in-one runtime like many gateways do.

As a result of Kong's adoption, the business is also growing very rapidly, which will allow us to better deliver OSS features moving forward :) [2]

You can ping me at https://twitter.com/subnetmarco

[1] https://docs.konghq.com/hub

[2] https://konghq.com/about-kong-inc/kong-hits-record-growth-20...

[+] hrrsppd|7 years ago|reply
How does 1.0 handle database migrations?
[+] yolo42|7 years ago|reply
Kong runs at a critical point in any infrastructure. Having any sort of downtime is usually unacceptable.

To avoid downtime during Kong upgrades, Kong now supports a blue-green deployment method where two Kong nodes, running version A and version A+1, can run together while the upgrade is rolled out, before switching all traffic to A+1.

[+] judithpatudith|7 years ago|reply
Kong has a migrations framework that handles them for you. If you're doing a migration because you're upgrading from an older version of Kong, you'll want to run it only once the Kong nodes are upgraded.

For example, here are the instructions on upgrading to 1.0, which walk you through a couple of migration scenarios. https://discuss.konghq.com/t/kong-1-0-is-now-generally-avail...
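For reference, the 1.0-era flow is roughly a two-phase migration driven by the Kong CLI (verify the exact steps against the linked upgrade guide before relying on this):

```shell
# Phase 1: from a single node running the new version, apply
# pending migrations; old and new nodes keep serving traffic
# against the same database during this window.
kong migrations up

# Phase 2: once all nodes run the new version, finalize the
# pending migrations so the old schema pieces are cleaned up.
kong migrations finish
```

Splitting the migration into `up` and `finish` is what makes the blue-green upgrade described above possible: the intermediate schema stays compatible with both versions until you finalize.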

[+] Supermighty|7 years ago|reply
What would be the best way to deploy this with my application in Kubernetes?
[+] rdli|7 years ago|reply
If you're using Kubernetes, you should check out Ambassador. Declarative syntax, no database (persists to etcd), very high performance, built on Envoy Proxy.