mbell | 2 years ago
Here are my gripes:
1) For me one of the biggest selling points is client code gen (https://github.com/OpenAPITools/openapi-generator). Basically it sucks, or at least it sucks in enough languages to spoil it. The value prop here is: define the API once, code gen the client for Ruby, Python and Scala (or insert your languages here). Often there are a half dozen clients for each language, and often they are simply broken (the generated code just straight up doesn't compile). Of the ones that do work, you get random PRs accepted that impose a completely different ideological approach to how the client works. It really seems like any PR is accepted with no overarching guidance.
2) JSONSchema is too limited. We use it for a lot of things, but it just makes some things incredibly hard. This is compounded by the seemingly limitless number of versions or drafts of the spec. If your goal is interop, which it probably is if you are using JSON, you have to go out and research what the lowest common denominator draft of JSONSchema is that's supported across the various languages you want to use, and limit yourself to that (probably draft 4, or draft 7).
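The lowest-common-denominator problem described above can even be linted for mechanically. A minimal stdlib-only sketch follows; the keyword-to-draft table is deliberately partial, and the naive walk would false-positive on property names that happen to collide with keywords:

```python
# Sketch: flag JSON Schema keywords newer than draft-4, so a schema meant
# for maximum interop can be linted before publishing. The mapping below
# covers common cases only; a real linter would track schema context.
POST_DRAFT4_KEYWORDS = {
    "const": "draft-6",
    "contains": "draft-6",
    "propertyNames": "draft-6",
    "if": "draft-7",
    "then": "draft-7",
    "else": "draft-7",
}

def find_newer_keywords(schema, path=""):
    """Recursively collect (path, keyword, draft) for keywords past draft-4."""
    found = []
    if isinstance(schema, dict):
        for key, value in schema.items():
            if key in POST_DRAFT4_KEYWORDS:
                found.append((path or "/", key, POST_DRAFT4_KEYWORDS[key]))
            found.extend(find_newer_keywords(value, f"{path}/{key}"))
    elif isinstance(schema, list):
        for i, item in enumerate(schema):
            found.extend(find_newer_keywords(item, f"{path}/{i}"))
    return found

schema = {
    "type": "object",
    "properties": {"status": {"const": "active"}},  # 'const' needs draft-6+
}
issues = find_newer_keywords(schema)
```

Running a check like this in CI is one way to keep a spec pinned to whatever draft your least-capable client library understands.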
On the pros side:
It does make pretty docs - kinda wish it would just focus on this and, in the process, not be as strict; I think it would be a better project.
GOATS-|2 years ago
[0]: https://github.com/RicoSuter/NSwag
[1]: https://github.com/fabien0102/openapi-codegen
dcre|2 years ago
I think it’s a pretty big problem for many devs that so many of the options are mediocre and they’re quite difficult to evaluate unless you have a lot of experience, and even then it takes a lot of time.
[1]: https://github.com/acacode/swagger-typescript-api
throwawaymaths|2 years ago
NSwag has important issues that are many years old still in their backlog.
1.6k issues, oldest unresolved 7 years old:
https://github.com/RicoSuter/NSwag/issues?q=is%3Aissue+is%3A...
SCUSKU|2 years ago
But it's been nice being able to make a backend change, run the code generator, and then be able to use whatever API in react. I hope this type of stuff gets developed more!
[0] - https://github.com/reduxjs/redux-toolkit/tree/master/package...
johnny_reilly|2 years ago
https://johnnyreilly.com/generate-typescript-and-csharp-clie...
simplesager|2 years ago
Feel free to email me at sagar@speakeasyapi.dev or join our slack (https://join.slack.com/t/speakeasy-dev/shared_invite/zt-1cwb...) . We're in open beta and working with a few great companies already and we'd be happy for you to try out the platform for free!
dandevs|2 years ago
The generators are open source: https://github.com/fern-api/fern
We rewrote the code generators from scratch in the language that they generate code in (e.g., the python generator is written in python). We shied away from templating - it's easier but the generated code feels less human.
Want to talk client library codegen? Join the Fern Discord: https://discord.com/invite/JkkXumPzcG
handrews|2 years ago
It's also worth noting that most JSON Schema replacements I've seen that prioritize code generation are far less powerful in terms of runtime validation (I have not examined Fern's proposal in detail, so I do not know if this is true for them).
The ideal system, to me (speaking as the most prolific contributor to JSON Schema drafts-07 through 2020-12), would have clearly defined code generation and runtime validation features that did not get in each other's way. Keywords like "anyOf" and "not" are very useful for runtime validation but should be excluded from type definition / code generation semantics.
This would also help balance the needs of strongly typed languages vs dynamically typed languages. Most JSON Schema replacements-for-code-generation I've seen discard tons of functionality that is immensely useful for other JSON Schema use cases (again, I have not deeply examined Fern).
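The tension described here can be made concrete with a hand-rolled matcher for a tiny subset of JSON Schema (type checks only; a real implementation would use a library such as jsonschema). The schema and names below are illustrative:

```python
# "anyOf" is easy to check at runtime but has no single obvious mapping to
# a generated static type, which is why codegen-oriented replacements tend
# to drop it.
def matches(instance, schema):
    t = schema.get("type")
    if t == "string":
        return isinstance(instance, str)
    if t == "integer":
        # bool is a subclass of int in Python; exclude it explicitly.
        return isinstance(instance, int) and not isinstance(instance, bool)
    if "anyOf" in schema:
        # Runtime validation: succeed if any branch matches...
        return any(matches(instance, s) for s in schema["anyOf"])
    return True

id_schema = {"anyOf": [{"type": "string"}, {"type": "integer"}]}
# ...but a code generator must pick a representation: a union/variant type
# in languages that have one, or an opaque "Any" where they don't.
assert matches("abc-123", id_schema)
assert matches(42, id_schema)
assert not matches(3.14, id_schema)
```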
seer|2 years ago
GraphQL promised us APIs that we can trust - since both the client and the server were implemented with the same schema, you would know for sure which requests the API would respond to and how; if it tried to do something outside of the schema, the server lib itself would throw a 500 error. This allowed you to generate lean, typesafe clients.
OpenAPI kinda allows you to do that but for any other http api - I’ve written some code to use the schema as a “source of truth” for the server code as well, proving at compile time that the code will do the correct requests and responses for all the endpoints, paths and methods. So if you are reading the schema, you know for sure that the api is going to return this, and any change has to start from modifying the api.
And in turn this allows a “contract first” dev where all parties agree on the api change first, and then go to implement their changes, using the schema as an actual contract.
Combine this with languages with expressive type systems, and it allows you a style of coding that's quite nice - "if it compiles it is guaranteed to be correct". Now of course this does not catch all bugs, but it kinda confines them to mostly business logic errors, and frees you from needing to write tons of manual unit tests for every request.
Oh as a bonus it can be used for runtime request validation as well, which allows you to have types generated for those as well, for the client _and_ the server! Makes changes in the api a lot more predictable.
Client / server code generation can also be implemented as just type generation with no actual code being created, sidestepping a lot of complaints about code generators.
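The "schema as contract" idea above can be sketched in a few lines of stdlib Python; the spec fragment, `validate`, and `get_user` are all illustrative, not from any particular library:

```python
# Minimal sketch of "the schema is the contract": the handler's response is
# checked against the declared response schema at runtime, so drift between
# implementation and spec fails loudly instead of silently.
SPEC = {
    "/users/{id}": {
        "get": {"response": {"required": ["id", "name"]}}
    }
}

def validate(payload, schema):
    # Only 'required' is enforced in this sketch; a real validator would
    # also check types, formats, etc.
    missing = [k for k in schema.get("required", []) if k not in payload]
    if missing:
        raise ValueError(f"response missing required fields: {missing}")
    return payload

def get_user(user_id):
    payload = {"id": user_id, "name": "Ada"}
    return validate(payload, SPEC["/users/{id}"]["get"]["response"])
```

With a generator in the loop, the same `SPEC` dict would also drive the client types, so a change really does have to start from the schema.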
I did package it up as OS https://github.com/ovotech/laminar but no longer have access to maintain it as I no longer work there unfortunately.
vesinisa|2 years ago
Just wanted to say that this is very cool, and I find it hard to understand why this is not already the norm in 2023. I've done something quite similar in a proprietary project (I called it "spec-driven development" in reference to "test-driven development").
I would first start by writing the OpenAPI spec and response model JSON schema. I could then write the frontend code, for example, as the API it called on the server was now defined. Only as the last step I would actually integrate the API to real data - this was especially nice as the customer in this particular project was taking their time to deliver the integration points.
All the time during development the API conformity was being verified automatically. It saved me from writing a bunch of boilerplate tests at least.
rattray|2 years ago
Unfortunately we don't yet have a "try now" button, and our codegen is still closed-source, but you can see some of the libraries we've generated for companies like Modern Treasury and sign up for the waitlist on our homepage.
Always happy to chat codegen over email etc.
taeric|2 years ago
To add my difficulty, the document generation inside Sphinx was less than up to date. Such that I didn't even get the pretty docs.
jordiburgos|2 years ago
It saves hours and hours of development time. And the ability to regenerate the whole application on spec changes is amazing.
BerislavLopac|2 years ago
It is not a specification to define your business logic classes and objects -- either client or server side. Its goal is to define the interface of an API, and to provide a single source of truth that requests and responses can be validated against. It contains everything you need to know to make requests to an API; code generation is nice to have (and I use it myself, but mainly on the server side, for routing and validation), but not something required or expected from OpenAPI.
For what it's worth, my personal preferred workflow to build an API is as follows:
1. Build the OpenAPI spec first. A smaller spec could easily be done by hand, but I prefer using a design tool like Stoplight [0]; it has the best Web-based OpenAPI (and JSON Schema) editor I have encountered, and integrates with git nearly flawlessly.
2. Use an automated tool to generate the API code implementation. Again, a static generation tool such as datamodel-code-generator [1] (which generates Pydantic models) would suffice, but for Python I prefer the dynamic request routing and validation provided by pyapi-server [2].
3. Finally, I use automated testing tools such as schemathesis [3] to test the implementation against the specification.
[0] https://stoplight.io/
[1] https://koxudaxi.github.io/datamodel-code-generator/
[2] https://pyapi-server.readthedocs.io
[3] https://schemathesis.readthedocs.io
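Step 3 of this workflow can be illustrated with a hand-rolled miniature of what a property-testing tool like schemathesis automates against a live server: derive inputs from the parameter schema, then check every response against the response schema. All names and schemas below are illustrative, stdlib only:

```python
import random

PARAM_SCHEMA = {"type": "integer", "minimum": 1, "maximum": 10_000}
RESPONSE_SCHEMA = {"required": ["id", "active"]}

def gen_value(schema, rng):
    # Only integer parameters are supported in this sketch.
    if schema["type"] == "integer":
        return rng.randint(schema["minimum"], schema["maximum"])
    raise NotImplementedError(schema["type"])

def fake_endpoint(user_id):
    # Stand-in for an HTTP call to the implementation under test.
    return {"id": user_id, "active": True}

def conforms(payload, schema):
    # Again, only 'required' is checked here; real tools validate far more.
    return all(key in payload for key in schema["required"])

rng = random.Random(0)  # seeded for reproducibility
for _ in range(100):
    assert conforms(fake_endpoint(gen_value(PARAM_SCHEMA, rng)), RESPONSE_SCHEMA)
```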
madeofpalk|2 years ago
This is still a win because you can still generate all your clients in sync with your API spec rather than doing all that manually.
kelnos|2 years ago
I agree that the official codegen is not that great. One of my former colleagues started guardrail[0] to offer better client -- and server -- codegen in Scala for a few different http/rest frameworks. Later, I added support for Java and some Java frameworks. (I haven't worked on the project in over a year, but from what I understand, it's still moving forward.)
Obviously that's a fairly limited set of languages and frameworks compared to what the official generators offer, and there are some OpenAPI features that it doesn't support, but guardrail is a good alternative if you're a Java or Scala developer.
> JSONSchema is too limited
I've run into some of the problems you've described, which can be a big bummer. For new APIs I'd designed, I took the approach of designing the API in a way that I knew I could express in OpenAPI without too much trouble, using only the features I knew guardrail supported well (or features I knew I could add support for without too much trouble). It's not really the ideal way to design an API, but after years of that sort of work, I realized one of the worst parts of building APIs is the tedious and error-prone process of building server routes or a client for it, and I wanted to optimize away as much of that as possible.
Ultimately my view is that if you are writing API clients and servers by hand, you're doing it wrong. Even if you end up writing your own bespoke API definition format and your own code generators, that's still better than doing it manually. Obviously, if something like OpenAPI meets your needs, that's great. And even if you don't like the output of the existing code generators, you can still write your own; there are a bunch of parser libraries for the format that will make things a lot easier, and it really isn't that difficult to do, especially if you pare your feature support down to the specifics of what you need.
[0] https://guardrail.dev
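The "pare your feature support down" approach can be surprisingly small. A deliberately stripped-down sketch of a bespoke generator: walk the spec's `components.schemas` and emit dataclasses, supporting only the handful of features actually used (object types with string/integer properties). The spec dict and names are illustrative:

```python
SPEC = {
    "components": {"schemas": {
        "User": {"type": "object", "properties": {
            "id": {"type": "integer"},
            "name": {"type": "string"},
        }},
    }}
}
PY_TYPES = {"integer": "int", "string": "str"}

def emit_models(spec):
    # Emit one @dataclass per schema; unsupported constructs would raise
    # KeyError here, which is the point: fail fast on features you chose
    # not to support.
    lines = ["from dataclasses import dataclass", ""]
    for name, schema in spec["components"]["schemas"].items():
        lines += ["@dataclass", f"class {name}:"]
        for prop, prop_schema in schema["properties"].items():
            lines.append(f"    {prop}: {PY_TYPES[prop_schema['type']]}")
        lines.append("")
    return "\n".join(lines)

source = emit_models(SPEC)
```

A real generator would read the spec from YAML and handle refs, optionality, and nesting, but the shape is the same: a parser, a type map, and a template-free emitter.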
Cthulhu_|2 years ago
It's only useful for generating types; most generators' APIs are stubs at best, which means it's pretty much useless for evolving API specifications.
JSON has its limitations, in that its type system is different enough from other languages that back-end generated code often feels awkward.
I think that the foundation should take ownership of the generators and come up with a testing, validation & certification system. Have them write a standardized test suite that can validate a generated client, making sure there's a checklist of features (e.g. more advanced constructs like `oneOf` with a discriminator, enums, things like that).
And they should reduce the number of generators. Have one lead generator for types, then maybe a number of variants depending on what client the user wants to use. But those could be options / flags on the generator.
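The "oneOf with a discriminator" construct mentioned above is exactly the kind of thing a standardized test suite should cover, since generators often fumble it. A hand-rolled sketch of what the construct means at runtime (the `petType` example mirrors the one in the OpenAPI docs; the code is illustrative):

```python
# The discriminator property selects which variant schema (and hence which
# generated type) applies to a payload.
DISCRIMINATOR = "petType"
VARIANTS = {
    "cat": {"required": ["petType", "meows"]},
    "dog": {"required": ["petType", "barks"]},
}

def select_variant(payload):
    kind = payload.get(DISCRIMINATOR)
    if kind not in VARIANTS:
        raise ValueError(f"unknown {DISCRIMINATOR}: {kind!r}")
    missing = [k for k in VARIANTS[kind]["required"] if k not in payload]
    if missing:
        raise ValueError(f"{kind} payload missing: {missing}")
    return kind

assert select_variant({"petType": "cat", "meows": True}) == "cat"
```

A certification suite could feed payloads like these to every generated client and check that each one dispatches to the right variant type.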
Of course, taking a step back, maybe OpenAPI and by extension REST/JSON is a flawed premise to begin with; comparing it with e.g. grpc or graphql, those two are fully integrated systems, where the spec and protocol and implementation are much more tightly bound. The lack of tight bounds (or strict standards for that matter) is an issue with REST/JSON/OpenAPI.
layoric|2 years ago
Another way of handling this is getting the server you are interacting with to generate the code directly, based on its own internal knowledge of how the APIs are put together. This puts more onus on the library creators to support languages etc., but provides a much better experience and a better chance things will 'just work', as there are simply fewer moving parts.
ServiceStack is a .NET library that does this with 'Add ServiceStack Reference'[0], which enables direct generation of Request and Response DTOs with the APIs for the specific server you are integrating with. IDE integration is straightforward, since pulling the generated code is just another web service call. Additional language generators are integrated directly. It has trade-offs, but I've yet to see a better dev experience.
[0] https://servicestack.net/add-servicestack-reference
(Disclaimer I work for ServiceStack).
sthuck|2 years ago
Anyone care to suggest alternatives though, assuming we want to call from node to python? I actually believe that having api packages with types is one of the only things startups should take from the enterprise world. I thought about GRPC, I had good experience with it as a developer, but the previous company had a team of people dedicated just to help with the tooling around GRPC and Protobufs.
So I picked OpenAPI, figuring simple is better, and plaintext over HTTP is simpler. And currently I do believe it's better than nothing, but not by much. I am actually in the process of trying to write my own codegen and seeing how far I can get with it.
Are protobufs with gRPC really the way to go nowadays? Should a startup of 20 developers just give up, document the API in some shared knowledge base, and leave it at that?
easton|2 years ago
https://github.com/RicoSuter/NSwag (It sucks in any OpenAPI yml, not just ones from Swashbuckle/C#)
dsinghvi|2 years ago
Check out this demo: https://www.loom.com/share/42de542022de4e55a1349383c7a465eb. Feel free to join our discord as well: https://discord.com/invite/JkkXumPzcG.
bob1029|2 years ago
That said, I didn't like the amount of moving pieces, annotation soup in code, etc. I got rid of all of it. Instead of relying on a fancy developer web portal with automagically updating docs, I am maintaining demo integration projects in repositories our vendors will have access to. I feel like this will break a hell of a lot less over time and would be more flexible with regard to human factors. Troubleshooting OpenAPI tooling is not something I want myself or my team worrying about right now.
jontro|2 years ago
For internal projects we use grpc which is a breeze to use in comparison.