Keep calm and release your API in prod

Writing a REST API has been an industry standard for a few years now, and so has the anguish of deploying it live and watching Sentry spam your inbox with bug reports because you or your QA (which is you, with a hat) were not careful or imaginative enough.

We have had the same solution since the beginning of (Epoch) time:

Write up tests

There is no secret: tests are the base of a solid product, in this case automated functional tests, to reproduce real-world scenarios as closely as possible ("I thought you needed the boolean as an integer?").

After a lot of research on the internet, I found Tavern. My criteria for the perfect tool were the following:

  1. Free
  2. Open Source
  3. Self hosted (we do not want to depend on someone else's infra)
  4. CLI based (so we can integrate it in our CI/CD pipeline)
  5. Up to date (some good solutions were not maintained anymore, unfortunately)

With Tavern you write your test cases in YAML. There is no code to write; you just need the Python runtime to run your tests.

One of the main advantages of writing functional tests this way is that they are independent from your code: you can migrate or rewrite your API and your test sets remain valid and usable.

Context

We already had an API in place for an F&B startup, along with an Angular web app and a Flutter mobile app we had developed as well, but this time we had to integrate with a business partner.

They would call our API in their pipeline and we could not afford any downtime, breaking changes, instability, and so on. More importantly, they would be the first ones to integrate it. So I had to put myself in an integrator's shoes and ask myself: "How many things could I do wrong based on this doc?" This is how I prepared my journey to inner peace:

Installation

I added tavern to a requirements.txt file and, to make things easy, wrote the following Makefile:

install:
	python -m venv env
	. env/bin/activate && pip install -r requirements.txt

clean:
	rm -rf env/
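
The requirements.txt itself can stay minimal. A sketch, pinning the Tavern version that appears in the test output later on (pinning is optional):

# requirements.txt -- tavern pulls in pytest as a dependency
tavern==1.14.2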

Writing tests

Now here is an anonymized test sample in test_profile.tavern.yaml:

test_name: Fetch the current profile

includes:
  - !include includes.yaml

stages:
  - &authenticate
    name: Authenticate
    request:
      url: "{base_url}/auth"
      method: POST
      json:
        email: "{email}"
        password: "{password}"
      headers:
        Content-Type: "application/json"
    response:
      status_code: 200 
      json:
        access: !anystr
        refresh: !anystr
      save:
        json:
          token: access
  - name: Fetch the profile
    request:
      url: "{base_url}/profile"
      method: GET 
      headers:
        Authorization: "Bearer {token}"
    response:
      status_code: 200 
      json:
        id: !anystr
        name: !anystr

---

The whole test is contained between test_name and `---`; you can add multiple tests in the same file. We kept all the tests related to the profile in the same file.
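
To make the layout concrete, a skeleton of a file holding two profile tests could look like this (stage details omitted; the second test name is just an illustration):

test_name: Fetch the current profile

stages:
  - ...

---

test_name: Update the current profile

stages:
  - ...

---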

A test can contain multiple stages; here we want to test that we can retrieve the profile of the current user:

  1. We authenticate ourselves and retrieve our access token (JWT). We verify that the HTTP code is 200 and that the response contains 2 strings: access and refresh. We can use the save key to save the response or elements of the response into a variable. Here we are saving the access token into a variable called token.
  2. We fetch our profile using the token saved in stage 1, then verify that the HTTP code is 200 and that the response contains 2 strings: id and name.

A few notes:

As you can see, we use !anystr to check the data type of a key in the response (many more type tokens are supported), but we can also test for an exact value like "Joe", True or 42.
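
For instance, a response block could mix type tokens and exact values like this (a sketch with made-up field names; !anyint, !anybool and !anything are among the other tokens Tavern provides):

    response:
      status_code: 200
      json:
        id: !anyint          # any integer is accepted
        name: "Joe"          # must match this exact value
        active: !anybool     # any boolean
        metadata: !anything  # any value at all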

Also, the includes let you share common elements and link them to your tests. Here is an anonymized version of the includes.yaml file:

name: API variables
description: Variables to manage the tests of the api 

variables:
  base_url: http://localhost:8080/api
  email: "{tavern.env_vars.EMAIL}"
  password: "{tavern.env_vars.PASSWORD}"

This means that every time we include this file, we have access to those three variables. As you can see with email and password, it is also possible to load values from environment variables.

Even more notes:

Notice the little & next to authenticate in the first stage? This is a YAML anchor, a way to reference a stage from other tests. We know that all these tests will need to authenticate, and we would like to avoid copy/pasting the first stage every time. So instead, we create an anchor and reference it like this:

test_name: Fetch an admin profile

includes:
  - !include includes.yaml

stages:
  - *authenticate
  - name: Fetch an admin profile
    request:
      url: "{base_url}/profile"
      method: GET 
      headers:
        Authorization: "Bearer {token}"
    response:
      status_code: 200
      json:
        id: !anystr
        name: !anystr
        admin_code: !anystr

---

Running tests

To run the tests, you simply need to run this in your CLI:

EMAIL="xxx" PASSWORD="xxx" tavern-ci test_profile.tavern.yaml

And you should get the following output:

===================================================== test session starts =====================================================
platform linux -- Python 3.9.5, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
rootdir: /home/pierre/Dev/profile-api-tests, configfile: pytest.ini
plugins: tavern-1.14.2
collected 2 items                                                                                                            

test_profile.tavern.yaml ...........                                                                                    [100%]

===================================================== 2 passed in 0.73s ======================================================
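
And since criterion 4 was about CI/CD integration, hooking this into a pipeline is just a matter of running the same command there. A minimal sketch, assuming GitLab CI with EMAIL and PASSWORD defined as CI/CD variables (the job name and image are illustrative):

# .gitlab-ci.yml
api-tests:
  image: python:3.9
  script:
    - make install
    - . env/bin/activate && tavern-ci test_profile.tavern.yaml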

Conclusion

We then kept on writing tests, one file per "module" and ended up with a very nice set of YAML files.

Now we were confident enough to release the API for a B2B integration partner. No more nightmares about receiving an email starting with "Did you know that we get a 500 if we...", just a bunch of tests based on said premonitory dreams.

If you have a problem and no one else can help, maybe you can hire the Kalvad-Team.