One of the most important qualities of a good software developer is the ability to produce reliable code. The most obvious ways to do this are by manually testing your code and by writing automated tests to exercise it. Automated testing is not a new concept. But, in my experience, it is not as widely practiced or understood as it should be. This article will touch on the different types of automated testing, and future articles will build upon this foundation.
There are several categories of automated tests: Unit, Integration, and End-to-End (E2E for short). There are also Performance tests but, in my experience, these are not part of the everyday development process. Instead, Performance testing is typically done at various points during the release cycle, if it is done at all.
The general rule is that unit tests should form the bulk of your automated tests as they are stable, run fast, are quick to write, and provide rapid feedback on the health of your source code. E2E tests are inherently fragile and should make up the smallest percentage of your automated tests. Integration tests fall in between the two. Often this kind of breakdown of automated testing is represented as a pyramid similar to this:
As mentioned above, unit tests are typically very small and very focused. As a result of this, they tend to be very stable. They also heavily rely on mocks, spies, fake objects, and dependency injection.
A mock is a type of object that identifies itself as a particular type but does not provide a backing implementation. Instead, it will record any calls to its methods and properties so that tests can verify the results. Mocks also allow the developer to define new behaviors for a function. For instance, the mock could be configured to always return a specific result for a method. This gives near-complete control over how a function operates and lets all kinds of scenarios be exercised to ensure the function behaves as expected.
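As a minimal sketch using Python's `unittest.mock` (the `checkout` function and `gateway` dependency here are hypothetical, just for illustration):

```python
from unittest.mock import Mock

# Hypothetical function under test: charges a card via a gateway dependency.
def checkout(gateway, amount):
    if gateway.charge(amount):
        return "paid"
    return "declined"

# The mock has no real implementation; we configure it to always
# report a successful charge.
gateway = Mock()
gateway.charge.return_value = True

result = checkout(gateway, 100)
assert result == "paid"

# The mock recorded the call, so the test can verify exactly how it was used.
gateway.charge.assert_called_once_with(100)
```

By flipping `return_value` to `False`, the same test setup can exercise the declined path without touching any real payment system.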
A spy is a mix of a mock and a real object. It has the real implementation but records all calls to its methods and allows the developer to define new behaviors as well. Typically, when I write unit tests, I am invoking the method I am testing via a spy.
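In `unittest.mock`, one way to get spy-like behavior is to wrap a real object; calls pass through to the real implementation while still being recorded (the `Calculator` class is a made-up example):

```python
from unittest.mock import Mock

class Calculator:
    def add(self, a, b):
        return a + b

calc = Calculator()
# A spy: wraps the real object, so calls go through to the real
# implementation while still being recorded.
spy = Mock(wraps=calc)

assert spy.add(2, 3) == 5              # real behavior runs
spy.add.assert_called_once_with(2, 3)  # but the call was recorded

# Behavior can still be overridden when a test needs it.
spy.add.return_value = 0
assert spy.add(2, 3) == 0
```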
Fake objects are essentially real data objects but are populated with fake data. For example, you may have a User data object in your project. In tests, you could create a factory method that would generate a real User object but filled with fake data. Something that makes it look real, but isn’t. This makes writing tests easier and quicker since you won’t have to know how to construct any complex data objects nor will you need to spend time doing so.
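A factory for fakes might look like this (the `User` shape and field names are invented for the example; overrides let each test change only the fields it cares about):

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    name: str
    email: str
    is_admin: bool

# Hypothetical factory: returns a real User populated with plausible
# fake data. Tests override only the fields they actually care about.
def fake_user(**overrides):
    defaults = dict(id=1, name="Test User", email="test@example.com", is_admin=False)
    defaults.update(overrides)
    return User(**defaults)

user = fake_user(is_admin=True)
assert user.is_admin
assert user.email == "test@example.com"
```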
Dependency injection, at a high level, is a way to provide external classes or objects that your function depends on. There are many frameworks and strategies available for the different languages and platforms and will not be covered here. For unit tests, one would create a mock for each of the classes that are needed by the function being tested.
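A bare-bones sketch of the idea, without any framework (the `ReportService` and repository here are hypothetical): the dependency arrives through the constructor instead of being created internally, so a test can hand in a mock.

```python
from unittest.mock import Mock

# The dependency is injected via the constructor rather than created
# inside the class, so tests can substitute a mock for it.
class ReportService:
    def __init__(self, repository):
        self.repository = repository  # injected dependency

    def active_user_count(self):
        return len([u for u in self.repository.all_users() if u["active"]])

# In a unit test, inject a mock repository instead of a real database.
repo = Mock()
repo.all_users.return_value = [{"active": True}, {"active": False}]

service = ReportService(repo)
assert service.active_user_count() == 1
```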
When I write my unit tests, I tend to focus on testing a single method. Any functions that the method calls would be stubbed out so that I have complete control over the flow. When I set up my collection of tests, I will always set it up for “happy path testing”. Meaning, I will set up functions to return valid data that will trigger the most likely used code path. I find this makes it easier to understand what the expectations are for each test and helps keep them small. When I start writing the “non-happy path” tests, it’s then straightforward enough to change the functions to do what they need to do in order to trigger those cases.
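A sketch of that setup pattern (the `greet` function and `lookup` collaborator are made up): the shared setup configures the happy path, and the non-happy-path test simply overrides the stub.

```python
import unittest
from unittest.mock import Mock

# Hypothetical function under test with one stubbed-out collaborator.
def greet(lookup, user_id):
    user = lookup.find(user_id)
    if user is None:
        return "who are you?"
    return f"hello, {user}"

class GreetTest(unittest.TestCase):
    def setUp(self):
        # Happy-path defaults: the stub returns valid data.
        self.lookup = Mock()
        self.lookup.find.return_value = "Ada"

    def test_greets_known_user(self):
        self.assertEqual(greet(self.lookup, 1), "hello, Ada")

    def test_unknown_user(self):
        # Non-happy path: change only what's needed to trigger this case.
        self.lookup.find.return_value = None
        self.assertEqual(greet(self.lookup, 1), "who are you?")
```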
Integration tests are necessary when you need to test that your code works within a subsystem outside of your control. For instance, let's say you are writing a SQL query to fetch data from a database. A unit test is no good here since it would not be able to verify that the SQL actually works when executed by the database. Instead, an integration test is what is needed since it will run the query on a real database. If you are an Android developer, Room DAOs and migrations might immediately come to mind. Another example is if you are writing an HTTP interceptor and you need to verify it does what it needs to do without interfering with the rest of the HTTP call.
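A minimal illustration of the SQL case, using an in-memory SQLite database so the query is actually executed by a real database engine (the table and query are invented for the example):

```python
import sqlite3

# Integration-style test: the SQL runs against a real (in-memory SQLite)
# database, so the query itself is actually verified, not mocked away.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("Grace",), ("Ada",)])

rows = conn.execute("SELECT name FROM users ORDER BY name").fetchall()
assert rows == [("Ada",), ("Grace",)]
conn.close()
```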
Integration tests rely less on mocks. You might still use them, but they aren't strictly necessary. Since integration tests bring in external systems, they will run more slowly than unit tests.
E2E tests are the most fragile and expensive type of automated test. They run through the application the way your target users would use it. For instance, if you're building a web application, then you would be testing your application through a browser. If you're building a mobile app, the tests will be running your application on an emulator or a real device. This can cause unexpected behavior since it isn't just your application that is running anymore. The tests also have to compete with other running applications, background processes, system resources, slow networks, etc. As a result, they run slower and can fail intermittently due to unexpected or unaccounted-for situations, for example a system pop-up that prevents the test from tapping on the screen.
E2E tests should not be confused with User Acceptance Testing (UAT). UAT is a phase in the software development life cycle, whereas E2E tests describe a specific test scenario. UAT should also not be confused with Acceptance tests. Depending on who you ask, some might say Acceptance tests are another name for E2E tests. Others might suggest that they are different in subtle ways. For instance, Atlassian defines Acceptance tests as tied to business requirements while E2E is tied more to user behavior.
Regardless of the definition, the main thing to keep in mind is that both E2E and Acceptance tests will test the application from the user’s perspective. If you are working on an e-commerce app, an example of an E2E test could be that the user should be able to log in, search for products, add them to the cart, see a dialog with a specific message, and be able to check out without seeing an error.
In my experience, the testing strategy for the projects that I have been on has looked more like this:
In my distant past, I was on a project where integration tests formed the foundation of our testing strategy instead of unit tests! The tests would take *hours* to run. I could speculate as to the reasons why these approaches were taken, but the important thing is that there were automated tests in place.
Regardless of what kinds of tests you write or what kind of automated testing strategy you adopt, it is better to have some tests rather than none. Automated tests act as a safety net that will notify you when something isn't working as expected. This saves time since you don't have to go through every part of the codebase to verify that the changes you made didn't break something. They also act as documentation that shows how the code is expected to behave.
I would love to hear about others’ experiences with automated testing. Are your definitions similar, or wildly different? How have your projects used automated testing? Let me know in the comments below!
As always, if you have any questions, please ask!