There has been a shift in service-oriented architectures (SOA) over the last few years towards smaller, more focussed “micro” services. There are many benefits to this approach, such as the ability to independently deploy, scale and maintain each component and to parallelize development across multiple teams. However, once these additional network partitions have been introduced, the testing strategies that applied to monolithic, in-process applications need to be reconsidered.
What is a Microservice?
The microservice architectural style involves developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, such as REST over HTTP. These services require a bare minimum of centralized management, can use different data storage technologies, and can be written in different programming languages.
A microservices architecture consists of small, focused services that together form a complete application. Each instance of a microservice represents a single responsibility within the application. The real advantage is that these services are independent of one another, which makes them independently deployable and testable.
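To make the definition concrete, here is a minimal sketch of a single-responsibility service communicating over REST-style HTTP, using only the Python standard library. The order-lookup service, its endpoint and its data are invented for illustration; a real service would run in its own process rather than a thread.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class OrderServiceHandler(BaseHTTPRequestHandler):
    """Hypothetical service owning one responsibility: order lookups."""

    ORDERS = {"42": {"id": "42", "status": "shipped"}}  # stand-in data store

    def do_GET(self):
        # Serve GET /orders/<id> as JSON over HTTP.
        order = self.ORDERS.get(self.path.rstrip("/").split("/")[-1])
        body = json.dumps(order if order else {"error": "not found"}).encode()
        self.send_response(200 if order else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass


def start_service(port=0):
    """Run the service in a background thread on an OS-assigned port."""
    server = HTTPServer(("127.0.0.1", port), OrderServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Each such service exposes only its public interface; consumers never reach into its data store directly.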
Let’s look at some approaches to automated testing.
A unit test exercises the smallest piece of testable software in the application to determine whether it behaves as expected. Unit tests are typically written at the class level or around a small group of related classes. The smaller the unit under test the easier it is to express the behaviour using a unit test since the branch complexity of the unit is lower.
With unit testing, you see an important distinction based on whether or not the unit under test is isolated from its collaborators.
Sociable unit testing focusses on testing the behaviour of modules by observing changes in their state. This treats the unit under test as a black box tested entirely through its interface.
Solitary unit testing looks at the interactions and collaborations between an object and its dependencies, which are replaced by test doubles.
These styles are not competing and are frequently used in the same codebase to solve different testing problems.
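The two styles can be sketched side by side. The `OrderTotal` unit and its `TaxRates` collaborator below are hypothetical names for illustration; the sociable test uses the real collaborator and observes only output, while the solitary test replaces it with a test double and also verifies the interaction.

```python
from unittest.mock import Mock


class TaxRates:
    """Real collaborator, used as-is by the sociable test."""

    def rate_for(self, region):
        return {"EU": 0.20, "US": 0.07}.get(region, 0.0)


class OrderTotal:
    """Unit under test; depends on a TaxRates collaborator."""

    def __init__(self, tax_rates):
        self.tax_rates = tax_rates

    def total(self, net, region):
        return round(net * (1 + self.tax_rates.rate_for(region)), 2)


def test_sociable():
    # Black box: real collaborator, assert only on the observable result.
    assert OrderTotal(TaxRates()).total(100.0, "EU") == 120.0


def test_solitary():
    # Test double replaces the collaborator; assert on the interaction too.
    rates = Mock()
    rates.rate_for.return_value = 0.10
    assert OrderTotal(rates).total(100.0, "US") == 110.0
    rates.rate_for.assert_called_once_with("US")
```

Note that the solitary test pins down how `OrderTotal` talks to its dependency, which makes it more precise but also more coupled to the implementation.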
An integration test verifies the communication paths and interactions between components to detect interface defects. Integration tests collect modules together and test them as a subsystem in order to verify that they collaborate as intended to achieve some larger piece of behaviour. They exercise communication paths through the subsystem to check for any incorrect assumptions each module has about how to interact with its peers.
Whilst tests that integrate components or modules can be written at any granularity, in microservice architectures they are typically used to verify interactions between layers of integration code and the external components to which they are integrating.
Examples of the kinds of external components against which such integration tests can be useful include other microservices, data stores and caches.
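As a sketch of a data-store integration test, the hypothetical `OrderRepository` below is exercised against a real (in-memory) SQLite database rather than a test double, so that the SQL and schema assumptions in the integration code are actually verified; the class and schema are invented for illustration.

```python
import sqlite3


class OrderRepository:
    """Integration-layer code that owns the SQL for the orders table."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, status TEXT)"
        )

    def save(self, order_id, status):
        self.conn.execute(
            "INSERT INTO orders (id, status) VALUES (?, ?)", (order_id, status)
        )

    def find(self, order_id):
        row = self.conn.execute(
            "SELECT id, status FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        return {"id": row[0], "status": row[1]} if row else None


def test_round_trip():
    # A real database engine catches incorrect assumptions a mock would hide,
    # e.g. a typo in the SQL or a mismatch with the schema.
    repo = OrderRepository(sqlite3.connect(":memory:"))
    repo.save("42", "shipped")
    assert repo.find("42") == {"id": "42", "status": "shipped"}
```

The same shape applies to a cache or another microservice: the test drives the integration layer against a real (or realistic) instance of the external component.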
An integration contract test is a test at the boundary of an external service verifying that it meets the contract expected by a consuming service. Whenever some consumer couples to the interface of a component to make use of its behaviour, a contract is formed between them. This contract consists of expectations of input and output data structures, side effects, and performance and concurrency characteristics.
Integration contract tests provide a mechanism to explicitly verify that a component meets a contract. When the components involved are microservices, the interface is the public API exposed by each service. The maintainers of each consuming service write an independent test suite that verifies only those aspects of the producing service that are in use.
Ideally, the contract test suites written by each consuming team are packaged and runnable in the build pipelines for the producing services. In this way, the maintainers of the producing service know the impact of their changes on their consumers.
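A consumer-side contract test might look like the sketch below. In practice `fetch_order()` would call the producing service's real API from the producer's build pipeline; here a canned payload stands in, and the field names are invented for illustration.

```python
import json


def fetch_order(order_id):
    # Stand-in for an HTTP call to the producing service's public API.
    canned = {
        "id": order_id,
        "status": "shipped",
        "total": 120.0,
        "internal_audit_ref": "x9",  # a field this consumer does not use
    }
    return json.loads(json.dumps(canned))


def test_contract():
    # Verify only the slice of the producer's response this consumer relies
    # on; the producer remains free to change everything else.
    order = fetch_order("42")
    assert isinstance(order["id"], str)
    assert order["status"] in {"pending", "shipped", "cancelled"}
    assert isinstance(order["total"], float)
```

Because the test deliberately ignores `internal_audit_ref`, the producing team can rename or remove it without breaking this consumer's suite.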
An end-to-end test verifies that a system meets external requirements and achieves its goals, testing the entire system, from end to end.
In contrast to other types of test, the intention with end-to-end tests is to verify that the system as a whole meets business goals irrespective of the component architecture in use.
In order to achieve this, the system is treated as a black box and the tests exercise as much of the fully deployed system as possible, manipulating it through public interfaces such as GUIs and service APIs.
As a microservice architecture includes more moving parts for the same behaviour, end-to-end tests provide value by adding coverage of the gaps between the services. This not only gives additional confidence in the correctness of the messages passing between the services but also ensures that any extra network infrastructure, such as firewalls, proxies or load balancers, is correctly configured.
End-to-end tests also allow a microservice architecture to evolve over time. As more is learnt about the problem domain, services are likely to split or merge and end-to-end tests give confidence that the business functions provided by the system remain intact during such large scale architectural refactorings.
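An end-to-end check can therefore be written purely against the public interface, as in this sketch. The base URL, endpoint and expected journey are assumptions for illustration; crucially, the test knows nothing about how many services, proxies or load balancers sit behind the URL.

```python
import json
import urllib.request


def get_json(base_url, path):
    """Drive the deployed system only through its public HTTP interface."""
    with urllib.request.urlopen(base_url + path) as resp:
        return resp.status, json.loads(resp.read())


def check_order_journey(base_url):
    # Assert on the business outcome; the component architecture behind
    # base_url (one service or twenty) is irrelevant to this test, which
    # is what lets services split or merge underneath it.
    status, order = get_json(base_url, "/orders/42")
    assert status == 200
    assert order["status"] == "shipped"
```

Because only the public interface is exercised, this test survives internal restructuring as long as the business behaviour is preserved.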
The test pyramid helps us to maintain a balance between the different types of test.
The concept of the test pyramid is a simple way to think about the relative number of tests that should be written at each granularity. Moving up through the tiers of the pyramid, the scope of the tests increases and the number of tests that should be written decreases.
At the top of the pyramid sits exploratory testing: manually exploring the system in ways that haven’t been considered as part of the scripted tests. Exploratory testing allows the team to learn about the system and to refine and improve their automated tests.