Robust APIs Through OpenAPI Generation

By Tom Hacohen (@TomHacohen)
Svix is the enterprise-ready webhook sending service. With Svix, you can build a secure, reliable, and scalable webhook platform in minutes. Looking to send webhooks? Give it a try!
You can't break your API. While humans can (reluctantly) adjust to drastic UI redesigns, API clients will just stop working with even the slightest change. This is why it's important for API providers and API consumers to agree on how to interact.
Svix customers interact with the Svix service using our APIs and SDK. Making sure that their implementations remain stable is a top concern for us, and we've developed a variety of tools and methods to ensure that we never break our API.
In this post we will show how we use our OpenAPI spec to ensure we don't accidentally break our API, and how we leverage it to keep our API consistent and high quality.
Generating our OpenAPI spec
OpenAPI (formerly known as Swagger) is an API description standard. OpenAPI can be used to precisely describe HTTP APIs in a widely understood format. You can use the OpenAPI spec to generate documentation, SDKs, and a variety of other assets.
There are two main approaches when it comes to maintaining OpenAPI specs: OpenAPI first, and code first.
OpenAPI first essentially means that people write the OpenAPI spec by hand, and then use it to generate server-side API stubs which they then fill in with code. So for example, if you would like to add a new create_entity API, you edit the OpenAPI spec and have it generate the function signature for the server-side implementation.
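To make the spec-first workflow concrete, here is a minimal hand-written fragment of the kind you would maintain in that approach. The create_entity operation, path, and request body shown here are hypothetical, purely for illustration:

```yaml
openapi: "3.0.3"
info:
  title: Example API   # placeholder name
  version: "1.0.0"
paths:
  /entities:
    post:
      operationId: create_entity
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                name:
                  type: string
      responses:
        "201":
          description: Entity created
```

A server-stub generator then turns a spec like this into function signatures that you implement by hand.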
Code first means that you write your code like you normally would, and your OpenAPI spec is automatically generated from it. So for example, when adding the create_entity API from above, you just write the code as usual, and the OpenAPI spec is updated for you.
I am strongly in the "code first" camp, and this is what we do at Svix as well. It boils down to a few things: writing OpenAPI specs by hand is verbose and error-prone, writing code is easier than writing valid OpenAPI, and generating from code lets us easily enforce global rules and conventions, among other reasons.
Another advantage of generating from code is that enforcement is defined in the same place as the annotation. E.g. we can add a regex restriction on a string and have it automatically annotate the OpenAPI spec accordingly, so the code and the spec are always in sync. This also means our OpenAPI spec is very well annotated; here is an example from our docs.
Our workflow is simple: we just write code, plus additional annotations and documentation as needed (usually automatic), and the OpenAPI spec is generated for us.
For example, in our codebase when we use a specific type in a structure, e.g. using ApplicationUid
in the structure parsing the request body like so:
```rust
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use validator::Validate;

#[derive(Clone, Serialize, Deserialize, Validate, JsonSchema)]
pub struct ApplicationIn {
    ...
    /// Optional unique identifier for the application.
    #[validate]
    pub uid: Option<ApplicationUid>,
}
```
We automatically get an enriched OpenAPI property, which includes all of the validation rules we impose on the type and that are automatically checked in code:
```json
"uid": {
  "description": "Optional unique identifier for the application.",
  "type": "string",
  "maxLength": 256,
  "minLength": 1,
  "pattern": "^[a-zA-Z0-9\\-_.]+$",
  "example": "unique-app-identifier",
  "nullable": true
}
```
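To show what those schema constraints mean at runtime, here is a small sketch (not Svix's actual code) of a check that enforces the same rules the property above declares: length between 1 and 256, and characters limited to the class [a-zA-Z0-9\-_.]:

```rust
/// Sketch of the uid validation rules encoded in the schema above:
/// length 1..=256 and only alphanumeric characters, '-', '_', or '.'.
fn is_valid_uid(uid: &str) -> bool {
    let len = uid.chars().count();
    if !(1..=256).contains(&len) {
        return false;
    }
    uid.chars()
        .all(|c| c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '.'))
}

fn main() {
    assert!(is_valid_uid("unique-app-identifier"));
    assert!(!is_valid_uid("")); // too short
    assert!(!is_valid_uid("has spaces")); // disallowed character
    println!("uid checks passed");
}
```

In the real codebase this kind of rule lives in one place (the type's validation attributes) and flows into both the runtime checks and the generated spec.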
Utilizing our OpenAPI spec for a better API
Because our API and OpenAPI spec are always in sync, any change to our API leads to a change in the OpenAPI spec. We commit the generated spec to version control (Git) and have CI tasks that automatically regenerate the spec on every PR and compare it against the persisted version to verify that they are identical. If they differ, the CI check fails.
This means that we can never accidentally change our API, as every change to the API will fail CI unless we also generate a new version of the OpenAPI spec and include it in the PR.
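As a sketch, such a CI check only takes a few lines in a workflow file. This is not Svix's actual pipeline; the generate-openapi command and file names below are hypothetical stand-ins for whatever your generator produces:

```yaml
# Hypothetical GitHub Actions job; command and paths are placeholders.
jobs:
  openapi-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Regenerate the spec
        run: ./generate-openapi --out openapi.generated.json
      - name: Fail if the committed spec is stale
        run: diff -u openapi.json openapi.generated.json
```

diff exits non-zero when the files differ, which is what fails the check.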
In order to make this experience nicer, we pretty-print the generated JSON file so that each property is on its own line, and use diff to produce a readable diff that's shown in CI, which makes review much easier for reviewers.
We also utilize the GitHub code owners functionality to make sure that the spec is reviewed by the team in charge of API design, to ensure that the API is well documented, appropriately named, and ready to be released. This can be done by adding the following line to .github/CODEOWNERS at the base of the repo:
/openapi.json @svix/ApiReview
This not only saves us from accidental API breaks, but also keeps our API quality high and our APIs consistent. Remember, once an API is "out", it is out. You don't want to be supporting a new API, a new field, or a new query parameter forever (or at least until the conclusion of a lengthy deprecation process) because your release process made it too easy to accidentally release changes. This can lead to having a confusing API and potentially costly or buggy functionality inadvertently exposed.
Another advantage of reviewing the OpenAPI spec is that it lets us nicely review the contract we are promising to our customers (including restrictions and other annotations), including parts that would not always be completely obvious in code as they are automatically generated.
It really is as simple as it sounds, and it's a process that most teams can adopt to drastically improve the quality of their APIs.
Taking OpenAPI to the next level
Having your API described in a formal (and popular) API description spec like OpenAPI unlocks many more possibilities.
You can run linting on the specification to ensure it complies with whatever rules you would like to enforce for your APIs: consistent names and paths, singular vs. plural in path names, and much more.
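One way to do this (an option, not necessarily what Svix uses) is Spectral, a popular OpenAPI linter. A ruleset extends the built-in OpenAPI rules and can add custom ones; the kebab-case rule below is illustrative:

```yaml
# Example .spectral.yaml; the custom path rule is a hypothetical illustration.
extends: ["spectral:oas"]
rules:
  operation-operationId: error  # built-in rule: every operation needs an operationId
  paths-kebab-case:
    description: Path segments should be kebab-case.
    given: "$.paths[*]~"
    then:
      function: pattern
      functionOptions:
        match: "^(\\/[a-z0-9-.{}]+)+$"
```

Running such a linter in the same CI job as the spec-diff check keeps style rules enforced automatically.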
We also have a post-processing step after our OpenAPI generation where we enrich our OpenAPI spec with automatically generated examples of how to use our SDKs, our CLI tool, and even cURL (example from our docs).
Speaking of docs, our API reference docs are always up to date, as they are generated live from our OpenAPI spec, which, as mentioned, is generated from our code.
Last, but not least, we also automatically generate our SDKs and CLI tool from our OpenAPI spec, keeping them up to date, well documented, and consistent with the rest of our API. All happening automatically thanks to our OpenAPI spec. We plan on writing more about this soon, so if you're interested, follow us on one of the links below, or subscribe to our newsletter.
Closing words
Automatically generating an OpenAPI spec, committing it to Git, and using it during PR reviews as a way to keep our APIs under check has enabled us to have more control over our API, make sure we don't accidentally break it, and ensure that we maintain a consistent, high quality API as we evolve the product.
Not to mention, having an OpenAPI spec has enabled us to quickly iterate on our nine SDKs (and counting), our CLI tool, and our documentation, keeping them all high quality, up to date, and with zero maintenance effort.
Got any cool tips and tricks to improve the quality of APIs? We would love to hear them. Please email us at contact@svix.com and let us know!
For more content like this, make sure to follow us on Twitter, GitHub, RSS, or our newsletter for the latest updates on the Svix webhook service, or join the discussion on our community Slack.