The decision to roll their own entirely in Go and then reimplement the Rails auth in Go, versus reimplementing auth in Lua and leaning on nginx for everything else, could be examined more. It feels a bit like they wanted to do it in Go, so nginx+Lua was a non-starter. Imagine how many crazy long-tail problems nginx has already solved.
There is always the benefit of building it up yourself and understanding it more deeply, though.
If Nginx+Lua had had better development and testing support back in 2016 (I haven't really looked at it in a while), we might have gone that way.
It was kind of "here's a script, load it on nginx and hope it works", so it required a considerable amount of work to make it reliable and testable. Go actually has a lot of the proxying machinery already done for you, so that part of the gateway isn't really that large.
There was no simple way to reload configs on the fly (other than updating the file locally with something like Chef or Ansible and reloading nginx on all machines), and that made it harder to provide self-service routes from the get-go (it's much easier nowadays with Nginx Service Mesh).
Nowadays, with the options available, I don't think we'd build it from scratch anymore (I think we could even use Zuul 2). I'd say our mistake was not open-sourcing it at the beginning to benefit from more external usage and contributions; the way it is tied to internal services now makes that hard to pull off.
> There is always the benefit of building it up yourself and understanding it more deeply, though.
That doesn't really fly, though, because unless you're the same coder on the same project forever, the team will eventually inherit someone else's code anyway. On average, it might as well be OSS that's popular, well documented, and has a community around it. Think about it: what would you rather inherit?
Of course, there are counter-arguments: we want everything in Go because our engineers know Go and most of the CNCF toolset we use is in Go; our use case is simpler or more specific, so the general OSS solution would need tailoring, and that's more work than rolling our own; the OSS solution just isn't that good, and we think we can do better and gain a competitive advantage by building it internally; loads, really.
Understanding it better, though? On average that seems like short-term thinking.
I’ve built an API gateway in Go at my last company and my current one, so I’m actually a fan of the pattern. Usually the decision revolves around complex auth and permissioning rules that already exist in Go and would be a pain to rewrite. It also allows custom multiplexing of requests across multiple downstream APIs, and it makes monitoring simpler, since you get a view of your entire platform by monitoring a single service. A reverse proxy in Go is dead simple to write and maintain, especially if it matches the rest of your stack.
How flexible is HAProxy, though? We recently started writing our own API gateway because we needed tight integration with existing services, and even support for some specific legacy authentication schemes.
The article touches on NGINX and Lua for API gateways, which is an approach I like very much. The NGINX website has a bunch of well-written training posts about microservices and gateways, e.g. https://www.nginx.com/resources/library/designing-deploying-...
Disclosure: I am a core developer of APISIX, and also a core developer of OpenResty (known as Nginx + Lua). I have written Go for several years and have contributed to github.com/golang/go.
We have benchmarked APISIX, Kong, and Tyk; APISIX was the fastest (APISIX > Kong > Tyk).
All of them have rich features and fancy GUIs.
If you really care about performance, APISIX is worth considering. Remember to benchmark every candidate in your own environment. Even if you don't benchmark them perfectly (every vendor complains that the others benchmark incorrectly, and some don't even allow others to publish benchmarks[1][2]), it is still closer to the way you will use them in production.
Advertisement time is over. Let's talk about something unrelated to my employer.
If you care about performance, forget about writing plugins in a guest language. A guest language is one the gateway doesn't support natively, like Go in Kong or Lua in Tyk. The overhead of context serialization and IPC is huge; I have seen such complaints more than once.
> Imagine how many crazy long-tail problems nginx has already solved
There is one problem that I believe Go-based API gateways can't solve unless Go's GC becomes as good as Java's. Some people have consulted me about replacing their Go implementation with an Nginx or Envoy one because of this problem.

[1]: https://www.brentozar.com/archive/2018/05/the-dewitt-clause-...
[2]: https://konghq.com/evaluation-agreement/ (see section 1.5(e))
Very interesting article. I do believe that with better documentation they probably could have been less hands-on with onboarding the "customers".
I am currently building a cloud-based API gateway aimed primarily at SMBs. The current gateways are quite complex to set up, and I believe the simpler use cases can be served better.
We even had to expose some of the Ruby code, mostly authorization policies, as a gRPC service so people wouldn't have to rewrite all the policies they already had in place.
This is an area of interest and learning for me. A common pattern for gateways is to provide id/auth and then access the backing services with a high-privilege service account. This can't cover 100% of cases, because some applications perform their own authorization at the data level: user X can see data A but not B, or in service Q user type X can CRUD attributes a through d but only user type Y can enter attribute e. What other patterns do people see?
Not very knowledgeable on the topic, but aren't API gateways like Amazon's and this one essentially vendor lock-in tools? Can you easily migrate your system off this type of gateway when you want to switch vendors?
Generically, any use of a vendor represents some amount of lock-in, since the reason the vendor was selected often reflects some unique capability or added value of that vendor.
The next step would be to make it dynamically configurable at runtime. Then there needs to be monitoring. And some tools for troubleshooting.
There's something to be said for battle-hardened tools.
Like Caddy? https://caddyserver.com/