D3A approach to tests


Like many other companies, we’ve been struggling with finding a balance in writing “just enough” tests. How much time should we spend on tests, what modules should be covered and what should be the coverage of each module? Then, there’s a question about test types. What should be covered by end-to-end tests versus unit & integration tests? And finally, there’s a question of how much should be covered by each of those test types.

The main problem we were facing was not having enough automated tests. And the ones we did have were often legacy, broken, or outdated, in both the end-to-end and the unit & integration areas. It’s easy to get overwhelmed and not know where to start with such a backlog.

Luckily, our QAs had a comprehensive list of manual test scenarios for each module in our application, very well structured in TestRail. We gathered this enormous list and discussed how we should automate each test scenario.

The Testing Pyramid

But just before we dive into the approach, let’s focus on the testing pyramid and how we came to appreciate how good it is. You can skip this section and go straight to The Approach.

You’ve probably heard of the testing pyramid. It’s about finding a balance in writing tests. If you haven’t heard of it, let me give you a little intro.

End-to-end tests are the most reliable when it comes to integrating all the pieces of your application. For example, you want to log in with a testing user to a real application running on a real server. You want to see whether the user can log in, sign up, or go through your purchase flow. These are some of the most important scenarios that you want to be sure are working all the time. One would think that you’d want such reliability for every single scenario. Well, the downside is that they are really slow. It can take dozens of minutes to run the full suite. You don’t want to run such tests for every single commit.
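To make this concrete, here is a minimal sketch of such a login scenario, assuming the selenium-webdriver Node bindings; the URL, selectors, and credentials are placeholders, not from our app:

```typescript
import { Builder, By, until } from "selenium-webdriver";

// Drives a real browser against a real deployment: UI, API, and
// database all have to cooperate for this check to pass.
async function userCanLogIn(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://staging.example.com/login"); // placeholder URL
    await driver.findElement(By.name("email")).sendKeys("test.user@example.com");
    await driver.findElement(By.name("password")).sendKeys("a test password");
    await driver.findElement(By.css("button[type='submit']")).click();
    // Wait until a post-login element appears; the selector is hypothetical.
    await driver.wait(until.elementLocated(By.css("[data-test='dashboard']")), 10_000);
  } finally {
    await driver.quit();
  }
}
```

Every step above crosses the full stack, which is exactly why such a test is both so trustworthy and so slow.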

Integration tests are still about integrating several smaller pieces of your application and figuring out whether they work together. For example, you want to test whether your component displays all the data correctly in the UI, whether it communicates well with another component, or whether a bunch of methods work well together. However, these are more isolated pieces of your application and they don’t interact with a real app. The upside is that they are much faster than end-to-end tests, though still not as fast as unit tests.
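A sketch of what such a test can look like, assuming a React component tested with Jest and React Testing Library; the UserCard component and its api module are hypothetical names:

```typescript
import React from "react";
import "@testing-library/jest-dom";
import { render, screen } from "@testing-library/react";
import { UserCard } from "./UserCard";

// The component and its data layer run together, but the network is
// mocked out: no real server, no real browser.
jest.mock("./api", () => ({
  fetchUser: jest.fn().mockResolvedValue({ name: "Ada", plan: "premium" }),
}));

test("renders the fetched user's name and plan", async () => {
  render(<UserCard userId={42} />);
  // findByText waits for the mocked fetch to resolve and the UI to update.
  expect(await screen.findByText("Ada")).toBeInTheDocument();
  expect(await screen.findByText(/premium/)).toBeInTheDocument();
});
```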

Unit tests focus on testing each method of your component, service, and so on: whether they return correct values under different conditions or with different parameters provided, for example. They run really fast; there’s no rendering, just pure code.
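For example, a unit test for a pure helper might look like this, assuming Jest; formatPrice is a hypothetical function, not from our codebase:

```typescript
// No DOM and no rendering: just inputs and expected outputs
// under different conditions.
function formatPrice(cents: number, currency: string): string {
  if (cents < 0) throw new Error("price cannot be negative");
  return `${(cents / 100).toFixed(2)} ${currency}`;
}

test("formats cents into a display price", () => {
  expect(formatPrice(1999, "EUR")).toBe("19.99 EUR");
  expect(formatPrice(0, "USD")).toBe("0.00 USD");
});

test("rejects negative prices", () => {
  expect(() => formatPrice(-1, "EUR")).toThrow();
});
```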

Where is the balance?

The key idea is that you should write very few end-to-end tests, a bit more integration tests, and a lot of unit tests. Only this way will your tests be comprehensive enough and still reasonably fast.

Learning the hard way

We realised this the hard way. We had a lot of manual end-to-end test scenarios defined by QA engineers and we were eager to automate them all using end-to-end tests. We had a number of integration and unit tests, but they were very light compared to those end-to-end scenarios. We believed that if we covered all possible scenarios with end-to-end tests, we’d be out of the woods. We started implementing them and soon realised that it’s not the way to go. The test run was really slow. And we learned our lesson. Luckily, barely 15% of those scenarios had been written by then.

The Approach

Initially, we wanted to cover most of the application with end-to-end tests, but later on, we realised that it’s really time-consuming (and expensive) to run the Selenium tests often, and it simply didn’t make sense to cover many variations of the same scenario with end-to-end tests.

We figured that the end-to-end tests should only check the essentials, for example, the fact that a component works in the context of the live application. Testing various scenarios, for example, different data depending on which user is logged in, wasn’t efficient in end-to-end tests. Such scenarios were meant for integration tests. This seems obvious now, but when we started, many articles were talking about different approaches, and without the actual experience, we couldn’t see the light at the end of the tunnel.
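As a sketch of that split, assuming Jest and React Testing Library, the per-user variations can be covered in integration tests like the one below (OrdersWidget and its user prop are hypothetical), while a single end-to-end test only confirms the widget shows up in the live app:

```typescript
import React from "react";
import "@testing-library/jest-dom";
import { render, screen } from "@testing-library/react";
import { OrdersWidget } from "./OrdersWidget";

// Each user variant runs in milliseconds; the slow end-to-end suite
// keeps a single check that the widget appears at all.
test.each([
  ["admin", "All orders"],
  ["customer", "Your orders"],
])("shows the right heading for a %s", (role, heading) => {
  render(<OrdersWidget user={{ role }} />);
  expect(screen.getByText(heading)).toBeInTheDocument();
});
```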

With a comprehensive set of manual scenarios in hand, we agreed to have several meetings where we’d do the following:

  1. Discuss how each module associated with a test scenario works.
    Maybe there’s a misconception about how a module works. Maybe everyone expects a different outcome from the scenario. Maybe the scenario should be combined with other scenarios. Maybe there’s something else. It’s always good to be on the same page, clear up assumptions, and align expectations. QAs and devs have very different points of view, and we should take advantage of that.
  2. Determine the scenario’s priority: whether it is a smoke, sanity, regression, or full-cycle scenario.
    Do we need to cover every tiny thing in this scenario right now? Is the scenario really crucial for the app to function? Do we need to run it all the time, or can we run it only once in a while to check the app’s health? Also, labelling tests makes it easy to distinguish them from new feature tests (see the tagging sketch after this list).
  3. Define each functionality case you should test.
    After the discussion in the first point, your views on the functionality should be aligned. You also have a pretty good idea of how crucial the scenario is. Now you should be able to define exactly what you should test. We don’t want QAs’ and devs’ tests to overlap; we want to communicate and define the test cases together.
  4. Assign some test cases to Selenium and others to unit & integration.
    Lastly, take the defined cases and decide which type of test is the best candidate for each of them. You can also already assign them to the particular devs who will implement them.
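To make the priorities from step 2 visible in the test suite itself, labels can live right in the test names. A minimal sketch, assuming Jest; the @smoke/@regression tags and the checkout tests are hypothetical:

```typescript
// Tagging tests by priority in their names lets CI select subsets,
// e.g. `jest -t "@smoke"` (Jest's --testNamePattern filter).
describe("checkout @smoke", () => {
  test("a logged-in user can complete a purchase", () => {
    // Crucial path: run on every pipeline.
    expect(true).toBe(true); // placeholder assertion for the sketch
  });
});

describe("checkout @regression", () => {
  test("discount codes combine according to the pricing rules", () => {
    // Lower priority: run on a schedule to check the app's health.
    expect(true).toBe(true); // placeholder assertion for the sketch
  });
});
```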

Further QA & dev sync

We discussed the already existing manual scenarios with QA engineers and devs. However, what about tests for new features or new regression tests? Without syncing QA engineers with software engineers, we can easily end up with another overwhelming list of manual scenarios, or, perhaps slightly better but still not good, with Selenium tests covering every single aspect of the new feature.

With every new scenario, we should use the same approach and sync a QA engineer with a developer. They should discuss how the module associated with the scenario works, determine the priority of the test (maybe some test cases aren’t so important and we can cover the scenario only partly), define what exactly should be tested about the module, and lastly assign those parts to the appropriate test type, and maybe already to a particular developer.

After this sync, the QA knows how thoroughly the end-to-end test should cover the new feature and the dev knows what exactly should be tested using integration & unit tests. The sync can happen during a stand-up, in a private meeting between the QA and the dev, or just in a written conversation over Slack. It doesn’t matter. The most important thing is the tight integration of QAs into the dev teams and their daily syncing.
