[+] [-] revel|6 years ago|reply
This is an interesting article, but there's a dichotomy here that isn't discussed that I think is important. Part of the appeal of an API gateway is that it facilitates bolting on solutions to a ton of common cross-cutting concerns that show up between external and internal callers. Purely internal-to-internal system communication isn't quite the same, and the set of concerns that affects each is different. Authentication and authorization are not necessarily concerns for system-internal calls, for instance, nor is endpoint stability (at least, not in the same way). This article blurs the line between the two.
[+] [-] donavanm|6 years ago|reply
Strongly disagree that “internal” interfaces have lesser access control or (to a degree) stability concerns. Any successful system will, sooner rather than later, have the same concerns as those “public” use cases.
Coming from inside AWS, the fact that we build on the same primitives as customers is an advantage. When my service calls EC2 APIs, it’s with the same IAM Principals, Policies, and evaluation that EC2 uses to authorize any other request. Similarly, inside my service, different components or microservices have their own Principals and can define access capabilities just as we expect customers to do. That means we’ll find sharp edges or gaps, but those drive improvements for us that solve the same problems our customers will get to sooner or later.
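(For readers unfamiliar with the IAM model being described: each internal component gets its own principal, and its access is scoped by an ordinary identity policy, the same mechanism customers use. A minimal, hypothetical example; the action and scope are invented for illustration:)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDescribeOnly",
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```

Attached to a service's role, this grants exactly one EC2 API action and nothing else; any other call from that principal is denied by default.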
[+] [-] hinkley|6 years ago|reply
I’m not so sure we won’t be talking about endpoint stability in five to eight years. Once we’ve figured out how to get high availability at the ingress, why not extend that down the stack?
Why do I need circuit breakers but the guy calling my service doesn’t?
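(The circuit-breaker pattern referenced above fits in a few lines. This is a minimal sketch of the idea, not any particular library's API: fail fast while a downstream dependency looks unhealthy, then allow a probe call after a cooldown.)

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    fail fast while open, and allow a trial call after a cooldown."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open; failing fast")
            # Cooldown elapsed: fall through and allow one trial call.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        else:
            # Success closes the circuit and resets the failure count.
            self.failures = 0
            self.opened_at = None
            return result
```

The point of the pattern is exactly the one made above: the *caller* holds the breaker, so a struggling service is shielded from retry storms it cannot control itself.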
[+] [-] cloudytoday|6 years ago|reply
It's the same article. I think there is a misconception that mesh will replace gateways, but as mentioned in earlier comments, it's really two different traffic patterns, north/south and east/west, so we'll need both.
[+] [-] alephnan|6 years ago|reply
A feature not mentioned is translating between HTTP/REST and other protocols. GCP Endpoints (https://cloud.google.com/endpoints/) has this functionality for gRPC and gRPC-Web (an additional proxy on top of gRPC). I'd love to learn about similar features on AWS's API Gateway.
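(For context, the HTTP/JSON-to-gRPC transcoding that GCP Endpoints performs is driven by `google.api.http` annotations on the service definition. Roughly, with service and field names invented for illustration:)

```proto
syntax = "proto3";

package example.v1;

import "google/api/annotations.proto";

message GetUserRequest {
  string user_id = 1;
}

message User {
  string user_id = 1;
  string display_name = 2;
}

service UserService {
  // The proxy maps GET /v1/users/123 to GetUser({user_id: "123"})
  // and renders the User response as JSON.
  rpc GetUser(GetUserRequest) returns (User) {
    option (google.api.http) = {
      get: "/v1/users/{user_id}"
    };
  }
}
```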
[+] [-] nine_k|6 years ago|reply
You should think more about API management (as in selling an API to someone) and about the gateway as a security enforcement point, making sure certain rules are always applied!
[+] [-] Bombthecat|6 years ago|reply
Those simple API gateways are being replaced, or will be; that's probably true.
[+] [-] hinkley|6 years ago|reply
I’m trying to convince my team to move some stats to the ingress server to reduce duplicate effort going forward, and also just to have an apples-to-apples comparison for some metrics.
We spend a lot of time trying to remove choke points from our code, but the fact is that when you need to enforce policy, the checkpoint is the only place you can safely put it.
It’s a tradeoff, to be sure.
[+] [-] dvtrn|6 years ago|reply
I’ve only recently begun my Kubernetes voyage of self-study and knowledge acquisition (it’s becoming more of a demand in job requirements, so I’ve begun studying), so pardon the ignorance, though that’s what motivates the question:
Is k8s an indicator of that convergence happening, or do I misunderstand this tooling?
[+] [-] hinkley|6 years ago|reply
> One thing to note: we want to be careful not to allow any business logic into this layer.
I can’t agree. But I also include operational concerns under the umbrella of “business logic”, and the only examples I can think of where I would break that rule with relish relate to ops, from audit trails to security.
[+] [-] sytse|6 years ago|reply
An API gateway can mean API Management, Cluster ingress, an API Gateway pattern, or a Service mesh.
API Management will still be needed, even with a service mesh. And I don’t think the article author thinks they will be replaced.
[+] [-] graphememes|6 years ago|reply
The service mesh still has ingress needs: north/south.
Internally the service mesh has interconnectivity needs: east/west.
You will still need a gateway / load balancer.
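(Concretely, the north/south half of this is what a Kubernetes Ingress resource describes: routing external traffic to a Service inside the cluster. A minimal example; the host, names, and port are hypothetical:)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
  - host: api.example.com      # external, north/south entry point
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc  # hypothetical in-cluster Service
            port:
              number: 80
```

East/west traffic between Services never touches this object; that is the gap a service mesh fills.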
[+] [-] tyingq|6 years ago|reply
That all 3 spaces converge seems inevitable.
[+] [-] birdyrooster|6 years ago|reply
[deleted]