Why TF should I stub dependencies for an E2E test?

E2E (or ‘end-to-end’) tests, when written for web applications, let you write scripts that interact with your application via a browser and then assert that it responds correctly.

Common libraries that allow you to do this for Angular apps are Selenium/Protractor and Cypress (you should check out Cypress in particular).

The browser can be ‘headless’, meaning that you can run it from the command line, without a GUI (see headless Chrome for an example). This means that these tests can be run by your build server to support a continuous integration/continuous deployment pipeline.

You can then develop scripts to click about and test your most important user flows quickly and consistently, on your real application, in a very close approximation of how your users would actually use it. This is an attractive alternative to spinning up your application and manually clicking through those flows yourself. Humans are good at lots of things, but computers trump them every time when it comes to performing repeated actions consistently.
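As a taster, here is roughly what one of these scripts might look like in Cypress. This is just a sketch — the route, `data-test` selectors and copy are all made up for illustration:

```typescript
// A minimal sketch of a Cypress E2E test for a checkout flow.
// The route and selectors are illustrative, not from a real app.
describe('checkout', () => {
  it('lets a user buy the thing', () => {
    cy.visit('/products/1');
    cy.get('[data-test=add-to-basket]').click();
    cy.get('[data-test=basket]').click();
    cy.get('[data-test=checkout]').click();

    // Assert that the app responded the way a real user would expect.
    cy.get('[data-test=confirmation]').should('contain', 'Thanks for your order');
  });
});
```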

E2E tests are an example of black box testing.

If the tests fail, your application is broken; if they pass, your application may still be broken, but it is not terribly broken (the tests ensure that key user flows are still accessible).

This process relies on the assumption that these tests are robust and not ‘flaky’: they only fail when something is actually broken, not just randomly.

There are a few things that can cause E2E tests to randomly fail, but by far the most common in my experience is some form of network latency. Basically any time your code is interacting with something which can take an indeterminate amount of time to finish, you are going to experience some pain when writing automated tests for it.

The most recent (and most evil) version of this I have experienced is an application that is plugged into a blockchain and involves waiting for transactions to be mined. This can take anywhere from a few seconds, to forever.

Unless you are a lunatic, if your application involves any sort of transaction that touches user data (purchases, bets, swiping right), your automated tests will already be running against some sort of test environment, meaning that when you mutate data, you do it on a copy rather than on production data. This comes with its own little pain points.

When talking to a truly stateful dependency (like an API plugged into a database), you will have to account for the state of the database when performing any actions in your browser test.

As one example, if you are logging in, you might have to first check whether the user has an account, then log in if they do, or create an account if not. This adds complexity to your tests and makes them harder to maintain (and potentially flakier).
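To make that concrete, here is a rough Cypress sketch of the kind of branching you end up with. The `/api/users/exists` endpoint and the selectors are made up — the point is that the test has to interrogate backend state before it can even start:

```typescript
// Hypothetical sketch: the test branches on backend state before doing anything useful.
it('logs in or registers depending on backend state', () => {
  cy.request('GET', '/api/users/exists?email=e2e-user@example.com').then(({ body }) => {
    if (body.exists) {
      cy.visit('/login');
      cy.get('[data-test=email]').type('e2e-user@example.com');
      cy.get('[data-test=password]').type('hunter2');
      cy.get('[data-test=submit]').click();
    } else {
      cy.visit('/register');
      // ...an entirely different flow, just to get to the thing you actually wanted to test
    }
  });
});
```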

Another option is to control the test environment, and write (even more) scripts to populate the test database with clean data that your tests can rely on (making sure a specific user is present, for example).

This removes complexity from the browser tests, but adds it to the scripts responsible for setup and teardown of data. It also requires that your test data is kept synchronised with any schema changes to your production database. Otherwise you run the risk of E2E tests which can pass even while your real application is broken, because they are testing against an out-of-date version of the backend.
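Such a seeding script might look something like this — a sketch only, where the `test-support` endpoints, credentials and environment variable are entirely hypothetical and your setup will differ:

```typescript
// Hypothetical seeding script, run before the E2E suite, so the browser tests
// can rely on a known user existing. Endpoints and credentials are made up.
const TEST_API = process.env.TEST_API_URL ?? 'https://staging.example.com/api';

async function seed(): Promise<void> {
  // Wipe and recreate the data the tests depend on.
  await fetch(`${TEST_API}/test-support/reset`, { method: 'POST' });
  await fetch(`${TEST_API}/test-support/users`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: 'e2e-user@example.com', password: 'hunter2' }),
  });
}

seed().catch((err) => {
  console.error('Seeding failed', err);
  process.exit(1);
});
```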

(To clarify, these are all valid options, and there are solutions to many of the problems above)

However, while this approach solves the problem of accidentally munging user data, a backend plugged into a staging database can still be forking slow, as again you are at the mercy of network speeds and many other things out of your control.

Taken together, this is why I favour ‘stubbing’ the slow bit of your application (in many cases a backend service). This involves providing a very simple approximation of your Slow Ass Service (SAS), which follows the same API, but returns pre-determined fixture data, without doing any slow networking.
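As a rough sketch of what I mean, a stub SAS could be as small as a tiny Express server that mirrors the real routes but answers instantly with fixtures. Express is just my choice here, and the routes and fixture data are made up:

```typescript
// A minimal sketch of a stub SAS: same API shape as the real service,
// but it returns canned fixture data immediately, with no slow networking.
import express from 'express';

const app = express();

const accountFixture = {
  id: 'user-1',
  email: 'e2e-user@example.com',
  balance: 100,
};

app.get('/api/accounts/:id', (_req, res) => {
  res.json(accountFixture);
});

app.post('/api/transactions', (_req, res) => {
  // The real service would wait for a transaction to be mined; the stub just says "done".
  res.json({ status: 'confirmed', hash: '0xdeadbeef' });
});

app.listen(3001, () => console.log('Stub SAS listening on :3001'));
```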

There is one major caveat to this approach.

It only works in a meaningful way if you have some sort of contract between the SAS and your frontend.

That is to say, your frontend and the SAS (Slow Ass Service) should both honour a shared contract, describing how the API should behave and what data types it should accept and return (for a REST API, Swagger/OpenAPI provides a good specification for describing API behaviour). Because this shared contract is written in a way that can be parsed, automated validation can be performed against both the frontend and the backend code to ensure that they meet the contract.
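A hand-rolled version of such a contract in TypeScript might look like the sketch below. In practice you might generate these types from an OpenAPI document instead, and all of the names here are illustrative:

```typescript
// A sketch of a shared contract between the frontend and the SAS.
// Both sides import these types, so neither can silently drift away from the other.
export interface Account {
  id: string;
  email: string;
  balance: number;
}

export interface TransactionReceipt {
  status: 'pending' | 'confirmed' | 'failed';
  hash: string;
}

export interface AccountsApi {
  getAccount(id: string): Promise<Account>;
  submitTransaction(from: string, to: string, amount: number): Promise<TransactionReceipt>;
}
```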

If you were reading closely, you will have noticed I mentioned data ‘types’, so most likely you will need some sort of typed language on both the frontend and within the SAS (I would recommend TypeScript 🙂 ).

With this in place, as long as your stub implementation of the SAS also honours this contract (and can be checked in alongside the SAS source code), you can feel happy(ish) that your stub represents a realistic interaction with the real SAS. If the contract changes and your code doesn’t, it should not compile, and your build should fail (which we want!).
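For example, a stub that implements the (hypothetical) contract from the sketch above refuses to compile the moment the contract changes underneath it:

```typescript
// Sketch of a stub that honours the shared contract.
// `implements AccountsApi` is what turns a contract change into a compile error.
import { Account, AccountsApi, TransactionReceipt } from './contract';

export class StubAccountsApi implements AccountsApi {
  async getAccount(id: string): Promise<Account> {
    return { id, email: 'e2e-user@example.com', balance: 100 };
  }

  async submitTransaction(_from: string, _to: string, _amount: number): Promise<TransactionReceipt> {
    // No mining, no network: just return a canned receipt immediately.
    return { status: 'confirmed', hash: '0xdeadbeef' };
  }
}
```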

With all of this in place, your E2E tests can be much less flaky (or not flaky at all), and as an added benefit they will run super fast on your build server, meaning the dev workflow is slightly nicer too.

Again, there are gazillions of different ways of approaching this problem, and obviously you need to figure out what it is that you care about.

In my experience though, this presents a good compromise and has meant that I (finally) have managed to get E2E tests running as part of CI/CD without wasting huge amounts of time and causing headaches.

In a subsequent post I demo how to actually get this working in an Angular application that depends on a strongly typed third party library, which happens to be slow as hell 🙂
