top | item 42800676

oppositelock | 1 year ago

I've been building APIs for a long time, using gRPC and HTTP/REST (we'll not go into CORBA or DCOM, because I'll cry). To that end, I've open sourced a Go library for generating your clients and servers from OpenAPI specs (https://github.com/oapi-codegen/oapi-codegen).

I disagree with the way this article breaks down the options. There is no difference between OpenAPI and REST, it's a strange distinction. OpenAPI is a way of documenting the behavior of your HTTP API. You can express a RESTful API using OpenAPI, or something completely random, it's up to you. The purpose of OpenAPI is to have a schema language to describe your API for tooling to interpret, so in concept, it's similar to Protocol Buffer files that are used to specify gRPC protocols.
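For instance, a minimal OpenAPI fragment (hypothetical endpoint and schema, just for illustration) describing a single operation; the same format could describe a RESTful design or something completely ad hoc:

```yaml
# Hypothetical spec fragment; names are illustrative, not from any real API.
openapi: "3.0.0"
info:
  title: Pet API
  version: "1.0.0"
paths:
  /pets/{petId}:
    get:
      operationId: getPet      # tooling keys generated code off this name
      parameters:
        - name: petId
          in: path
          required: true
          schema: { type: string }
      responses:
        "200":
          description: A single pet
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:   { type: string }
                  name: { type: string }
```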

gRPC is an RPC mechanism for sending protos back and forth. When Google open sourced protobufs, they didn't open source the RPC layer, called "stubby" at Google, which is what made protos really great. gRPC is not stubby, and it's not as awesome, but it's still very efficient at transport, and fairly easy to extend and hook into. The problem is, it's a self-contained ecosystem that isn't as robust as mainstream HTTP libraries, which give you all kinds of useful middleware like logging or auth. You'll be implementing lots of these yourself with gRPC, particularly if you are making RPC calls across services implemented in different languages.

To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol. With an HTTP API, you can make calls to it via curl or your own code without having the OpenAPI description, so it's a "softer" binding. This fact alone makes it easier to work with and debug.

mandevil|1 year ago

There is a distinction between (proper) REST and what this blog calls "OpenAPI". But the thing is, almost no one builds a true, proper REST API. In practice, everyone uses the OpenAPI approach.

The way that REST was defined by Roy Fielding in his 2000 Ph.D. dissertation ("Architectural Styles and the Design of Network-based Software Architectures"), it was supposed to allow a web-like exploration of all available resources. You would GET the root URL, and the 200 OK response would provide a set of links that would allow you to traverse all available resources provided by the API (it was allowed to be hierarchical, but everything had to be accessible somewhere in the link tree). This was supposed to allow discoverability.
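As a sketch (resource names hypothetical), a root response in that style might look like this, with the client following links rather than assembling URLs from out-of-band knowledge:

```json
{
  "links": {
    "self":   { "href": "/" },
    "orders": { "href": "/orders" },
    "users":  { "href": "/users" }
  }
}
```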

In practice, everywhere I've ever worked over the past two decades has just used POST resource_name/resource_id/sub_resource/sub_resource_id/mutation_type (or PUT resource_name/resource_id/sub_resource/sub_resource_id, depending on how that company handled the idempotency issues that PUT creates), with all of those being magic URLs assembled by the client with knowledge of the structure (often defined in something like Swagger/OpenAPI), lacking the link traversal from root that was a hallmark of Fielding's original work.

Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.

bborud|1 year ago

I tend to prefer RESTish rather than RESTful since RESTful almost suggests attempting to implement Fielding's ideas but not quite getting there. I think the subset of approaches that try and fail to implement Fielding's ideas is an order of magnitude (or two) smaller than those who go for something that is superficially similar, but has nothing to do with HATEOAS :-).

REST is an interesting idea, but I don't think it is a practical one. It is too hard to design tools and libraries that help/encourage/force the user to implement HATEOAS sensibly, easily and consistently.

nicholasjarnold|1 year ago

> Pedants (which let's face it, most of us are) will often describe what is done in practice as "RESTful" rather than "REST" just to acknowledge that they are not implementing Fielding's definition of REST.

Yes, exactly. I've never actually worked with any group who had actually implemented full REST. When working with teams on public interface definitions, I've personally tended to use the so-called Richardson Maturity Model[0] and advocated for what it calls 'Level 2', which is what I think most of us find canonical and least surprising in a RESTful interface.

[0] - https://en.wikipedia.org/wiki/Richardson_Maturity_Model

physicles|1 year ago

> There is no difference between OpenAPI and REST, it's a strange distinction.

That threw me off too. What the article calls REST, I understand to be closer to HATEOAS.

> I've open sourced a Go library for generating your clients and servers from OpenAPI specs

As a maintainer of a couple pretty substantial APIs with internal and external clients, I'm really struggling to understand the workflow that starts with generating code from OpenAPI specs. Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.

This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code. It's not perfect, but it's a 95% solution that works with both Echo and Gin. So when we need to stand up a new endpoint and allow the front end to start coding against it ASAP, the workflow looks like this:

1. In a feature branch, define the request and response structs, and write an empty handler that parses parameters and returns an empty response.

2. Generate the docs and send them to the front end dev.

Now, most devs never have to think about how to express their API in OpenAPI. And the docs will always be perfectly in sync with the code.

plorkyeran|1 year ago

HATEOAS is just REST as originally envisioned but accepting that the REST name has come to be attached to something different.

jpc0|1 year ago

> This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code

OpenAPI is a spec, not documentation. Write the spec first, then generate the code from the spec.

You are doing it backwards, at least in my opinion.

oppositelock|1 year ago

This comes down to your philosophical approach to API development.

If you design the API first, you can take the OpenAPI spec through code review, making the change explicit, forcing others to think about it. Breaking changes can be caught more easily. The presence of this spec allows for a lot of work to be automated, for example, request validation. In unit tests, I have automated response validation, to make sure my implementation conforms to the spec.
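A crude stand-in for that kind of automated response validation (a real setup would validate against the OpenAPI schema itself; `Pet` and the JSON payloads here are hypothetical): strict-decode the handler's output into the generated model, so any field the implementation emits that the spec doesn't describe fails the test.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Pet stands in for a model generated from the OpenAPI spec.
type Pet struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// conformsToModel reports whether raw JSON matches the generated model
// exactly: fields the spec doesn't describe cause a decode error.
func conformsToModel(raw []byte, model any) error {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	return dec.Decode(model)
}

func main() {
	ok := []byte(`{"id":"1","name":"Rex"}`)
	drifted := []byte(`{"id":"1","name":"Rex","color":"brown"}`) // "color" not in spec

	fmt.Println(conformsToModel(ok, &Pet{}))      // prints <nil>
	fmt.Println(conformsToModel(drifted, &Pet{})) // prints an unknown-field error
}
```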

Iteration is quite simple, because you update your spec, which regenerates your models but doesn't affect your implementation. It's then on you to update your implementation; that can't be automated without fancy AI.

When the spec changes follow the code changes, you have some new worries. If someone changes the schema of an API in the code and forgets to update the spec, what then? If you automate spec generation from code, what happens when you express something in code which doesn't map to something expressible in OpenAPI?

I've done both, and I've found that writing code spec-first, you end up constraining what you can do to what the spec can express, which allows you to use all kinds of off-the-shelf tooling to save you time. As a developer, my most precious resource is time, so I am willing to lose generality going with a spec-first approach to leverage the tooling.

ak217|1 year ago

In my part of the industry, a rite of passage is coming up with one's own homegrown data pipeline workflow manager/DAG execution engine.

In the OpenAPI world, the equivalent must be writing one's own OpenAPI spec generator that scans an annotated server codebase, probably bundled with a client codegen tool as well. I know I've written one (mine too was a proper abomination) and it sounds like so have a few others in this thread.

Cthulhu_|1 year ago

> Once you've filled in all those generated stubs, how can you then iterate on the API spec? The tooling will just give you more stubs that you have to manually merge in, and it'll get harder and harder to find the relevant updates as the API grows.

This is why I have never used generators to generate the API clients, only the models. Consuming an HTTP-based API is just a single-line function nowadays in the web world, if you use e.g. react / tanstack query or write some simple utilities. The generated clients are almost never good enough. That said, replacing the generator templates is an option in some of the generators; I've used the official OpenAPI Generator for a while, which has many different generators, but I don't know if I'd recommend it because the generation is split between Java code and templates.

talideon|1 year ago

I'm scratching my head here. HATEOAS is the core of REST. Without it and the uniform interface principle, you're not doing REST. "REST" without it is charitably described as "RESTish", though I prefer the term "HTTP API". OpenAPI only exists because it turns out that developers have a very weak grasp on hypertext and indirection, but if you reframe things in a more familiar RPC-ish manner, they can understand it better as they can latch onto something they already understand: procedure calls. But it's not REST.

mkleczek|1 year ago

> This is why I created an abomination that uses go/ast and friends to generate the OpenAPI spec from the code.

This is against "interface first" principle and couples clients of your API to its implementation.

That might be OK if the only consumer of the API is your own application, as in that case the API is really just an internal implementation detail. But even then, once you have to support multiple versions of your own client, it becomes difficult not to break them.

XorNot|1 year ago

The oapi-codegen tool the OP put out (which I use) solves this by emitting an interface, though. OpenAPI has the concept of operation names (which also have a standard pattern), so your generated code is simply implementing operation names. You can happily rewrite the entire spec, and provided the operation names are the same, everything will still map correctly, which solves the coupling problem.

cpursley|1 year ago

I'm piggybacking on the OpenAPI spec as well to generate a SQL-like query syntax along with generated types which makes working with any 3rd party API feel the same.

What if you could query any ole' API like this?:

  Apipe.new(GitHub) |> from("search/repositories") |> eq(:language, "elixir") |> order_by(:updated) |> limit(1) |> execute()

This way, you don't have to know about all the available gRPC functions or the 3rd party API's RESTful quirks, while retaining built-in documentation and having access to types.

https://github.com/cpursley/apipe

I'm considering building a TS adapter layer so that you can just drop this into your JS/TS project like you would with Supabase:

  const { data, error } = await apipe.from('search/repositories').eq('language', 'elixir').order_by('updated').limit(1)

Where this would run through the Elixir proxy, which would do the heavy lifting like async, handle rate limits, etc.

cyberax|1 year ago

> To me, the problem with gRPC is proto files. Every client must be built against .proto files compatible with the server; it's not a discoverable protocol.

That's not quite true. You can build an OpenAPI description based on JSON serialization of Protobufs and serve it via Swagger. The gRPC itself also offers built-in reflection (and a nice grpcurl utility that uses it!).

TheGoodBarn|1 year ago

Just chiming in to say we use oapi-codegen every day and it’s phenomenal.

Migrated away from Swaggo -> oapi during a large migration to be interface first for separating out large vertical slices and it’s been a godsend.

happyweasel|1 year ago

Buggy/incomplete OpenAPI codegen for Rust was a huge disappointment for me. At least with gRPC, some languages are first-class citizens. Of course, generated code has some ugliness. Kinda sad that HTTP/2 traffic can be flaky due to bugs in network hardware.