Setup test-data for Quarkus integration tests
In Spring Boot, creating test data was easy via the `@Sql` and `@SqlGroup` annotations. Quarkus does not support this feature. Even worse: if you want to run Quarkus tests against a native image, you have no access to the internals of the project at all. This article describes my way of solving the issue of providing predictable test data to integration tests.
There are some basic rules when writing stable automated tests in software development.
- A test should be independent of other tests. You cannot rely on tests running in a specific order, because you might want to run single tests in isolation.
- You should be able to run a test multiple times without it failing.
- A test should ideally test one thing.
- Tests should be fast to run.
The last two points may contradict each other, so you have to be considerate about that.
For unit tests, it is usually easy to achieve all of these goals, because the units we are testing seldom rely on stored data. If you test an isolated function or component, you can precisely determine the input and compare the output to expected values.
For testing database access functions, this is more complicated. You have to make sure that the database stores predictable data before running the test. Any leakage of data between tests may cause failures, leaving you with flaky tests.
Integration tests and end-to-end tests are even more difficult in that respect. You may have multiple data-sources that have to be reset to a predictable state simultaneously.
Database tests
Spring Boot provides a simple solution to this topic: the `@Sql` and `@SqlGroup` annotations. They allow you to specify an SQL file that is executed prior to each test.
Quarkus does not have that feature, so what to do? A colleague of mine suggested using Flyway migrations to achieve the goal. I think another approach is easier.
Let’s have a closer look at the classes concerning user logins. First, there is the entity class, which uses Panache to access the database.
```java
@Entity(name = "user_password")
@Getter
@Setter
public class UserPasswordEntity extends PanacheEntity {

    private String username;
    // More fields coming here
    // ...

    public static Optional<UserPasswordEntity> findByUsername(String username) {
        return find("username", username).firstResultOptional();
    }
}
```
Then, there is the gateway implementation, i.e. the service that uses the entity to implement the `UserAccounts` interface defined in the `core` module.
```java
@RequiredArgsConstructor
@Singleton
@Slf4j
public class UserPasswordGatewayImpl implements UserAccounts {

    public static final int ITERATIONS = 10;

    @Override
    @Transactional
    public void addUser(String username, char[] password) {
        new UserSaveOperation(username, password).addUser();
    }

    // ...
    // more methods coming here
}
```
`UserSaveOperation` is a helper class that makes the code easier to read. Internally, it uses the `UserPasswordEntity` to work with the database.
When writing a test for `UserPasswordGatewayImpl`, we need to make sure that the database contains no users. Usually, we would have an SQL file containing the statement `delete from user_password` and reference it in the `@Sql` annotation. But we can also use Panache to clear the database.
```java
@RequiredArgsConstructor
@QuarkusTest
@Slf4j
class UserPasswordGatewayImplTest {

    public static final char[] PASSWORD = "password".toCharArray();
    public static final char[] OTHER_PASSWORD = "other_password".toCharArray();

    private final UserAccounts passwordService;

    @BeforeEach
    @Transactional
    public void cleanup() {
        UserPasswordEntity.deleteAll();
    }

    @Test
    void correctPassword_isVerified() {
        passwordService.addUser("testuser", PASSWORD);
        assertThat(passwordService.verifyPassword("testuser", PASSWORD)).isTrue();
    }

    @Test
    void wrongPassword_isNotVerified() {
        passwordService.addUser("testuser", PASSWORD);
        assertThat(passwordService.verifyPassword("testuser", OTHER_PASSWORD)).isFalse();
    }

    // ...
    // More tests here
}
```
If we have more tables and entity classes, we can extract the `@BeforeEach` method into a superclass that is used for all tests. This essentially has the same effect as an `@Sql` annotation. The advantage of using Panache is that we don’t have to worry about SQL dialects anymore.
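Stripped of the Quarkus annotations, the idea of such a superclass can be sketched like this. The class name and the `Runnable`-based wipe steps are my own simplification; in the real project the steps would be calls like `UserPasswordEntity::deleteAll` inside a `@BeforeEach` `@Transactional` method:

```java
import java.util.List;

// Simplified sketch of a shared test superclass: each entity class
// contributes one wipe step, and cleanup() runs them all before a test.
public abstract class CleanDatabaseTest {

    // Every table-clearing step the concrete test suite registers,
    // e.g. UserPasswordEntity::deleteAll in the real project.
    protected abstract List<Runnable> wipeSteps();

    // Would be annotated with @BeforeEach and @Transactional in Quarkus.
    public void cleanup() {
        wipeSteps().forEach(Runnable::run);
    }
}
```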
Integration tests
We also want integration tests for our application, and Quarkus offers two kinds of such tests.
”Unit” tests of the whole application
We can use the `@QuarkusTest` annotation in the `app` module. The tests are then run in the “test” phase, along with the unit tests, but we can still test the whole application. Usually we do this by accessing the REST API using rest-assured or similar testing libraries.
In this case, the test runs in the same container as the application, and we can wire specific beans into our test. Essentially, we can use the same method for database resets as we did in the database tests.
Real integration tests
We can use the maven-failsafe-plugin to run real integration tests. This happens in Maven’s `integration-test` phase, after the JAR file or the native image has been built. The test runs against the compiled application.
Quarkus proposes the following way of writing such tests: first, write a test using the `@QuarkusTest` annotation, rest-assured and the complete database setup.
```java
@QuarkusTest
public class LoginResourceTest {

    @Test
    public void loginEndpoint_returnsJsonWebToken_forAuth() {
        String token = given()
                .body(Map.of("username", "testuser", "password", "abc"))
                .contentType(ContentType.JSON)
                .when()
                .post("/api/login")
                .body()
                .jsonPath()
                .getString("token");

        given()
                .header("Authorization", "Bearer " + token)
                .when()
                .get("/api/users/me")
                .then()
                .assertThat()
                .statusCode(200)
                .body("userName", is("testuser"));
    }
}
```
Then, add a subclass of this test. Its class name must end in `IT`, so that the maven-failsafe-plugin picks it up, and it is annotated with `@QuarkusIntegrationTest`, but nothing else is added.
```java
@QuarkusIntegrationTest
public class LoginResourceTestIT extends LoginResourceTest {}
```
We now have a problem: the test is not running inside a Quarkus container. We cannot use Panache to clear the database, and we don’t have (easy) access to the database configuration. We have to do it differently; these are the options that came to my mind:
- Read the database configuration from `application.properties` and create a JDBC connection manually, in order to access the database.
- Run a new Quarkus command-line app with the same configuration, in order to get a new container with access to the database entities.
- Add a “test-setup” endpoint to the application that can be called to reset the database.
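A rough sketch of the first option: read the datasource settings with plain Java and open a JDBC connection directly. The property keys are the standard Quarkus datasource keys; the helper class itself is hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

// Hypothetical helper: re-reads the Quarkus datasource config and opens
// a plain JDBC connection, bypassing the framework entirely.
public final class RawJdbc {

    // Parses application.properties (e.g. loaded from the classpath).
    public static Properties loadConfig(InputStream in) throws IOException {
        Properties props = new Properties();
        props.load(in);
        return props;
    }

    // Opens a connection using the standard Quarkus datasource keys.
    public static Connection connect(Properties props) throws SQLException {
        return DriverManager.getConnection(
                props.getProperty("quarkus.datasource.jdbc.url"),
                props.getProperty("quarkus.datasource.username"),
                props.getProperty("quarkus.datasource.password"));
    }
}
```

Before each test, one could then execute `delete from user_password` over that connection.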
Building a JDBC connection manually should be feasible, but it feels odd to re-implement the config-reading part when it is already implemented in Quarkus. We could also use internal Quarkus classes to read the config. But as I said: “internal”… It is probably not meant to be used that way, and future versions might change those classes. I also couldn’t find documentation about that, and I have become very suspicious of using undocumented features of a framework over the last decade.
I tried adding a new main class for a Quarkus CLI application and running it via `Quarkus.run()` before each test case. Sadly, it always started the HTTP server on the testing port as well, and it did not quit properly, so when the test started the real application, it could not bind to the correct port. I am sure there are ways around this, but at that point I thought that the last option would probably be easier to implement. Also, even though Quarkus starts fast, running a new Quarkus application before each test adds a certain overhead, and I assumed that the last option would give faster test runs in this respect.
The nice thing about the last option is: We are not restricted to resetting the database. In the future, we will have more external services that have to be cleared before each test: ElasticSearch, S3 and more. If the reset-action is part of the application, we can integrate it the same way as every other use-case, using clean architecture methods.
On the other hand, adding a “test-setup” endpoint to the application is a potential security risk. An attacker could erase all data, if it is not correctly secured. By all means, it should be disabled completely in a production deployment.
I decided to use option three anyway, and tackle the security issues as follows:
- Add a config property `gachou.dangerous-test-setup-access.token`.
- If, and only if, this property is set and the correct access token is passed in the request header `X-Gachou-Test-Setup-Token`, the endpoint will work.
- In the test, we use the annotation `@TestProfile(TestProfileForTestDataSetup.class)` to activate a test profile that sets `gachou.dangerous-test-setup-access.token` in the application under test.
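With that mechanism in place, resetting the application from an integration test boils down to one HTTP call carrying the token header. A minimal sketch using the JDK's `HttpClient` API; the base URL, the endpoint path `/test-setup` and the token value are placeholders of mine:

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Builds the reset request an integration test sends before each case.
// The endpoint path "/test-setup" is an illustrative placeholder.
public final class TestSetupCall {

    public static HttpRequest resetRequest(String baseUrl, String token, String jsonBody) {
        return HttpRequest.newBuilder(URI.create(baseUrl + "/test-setup"))
                .header("X-Gachou-Test-Setup-Token", token)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }
}
```

The request can then be sent with `HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())`.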
We also use some more helper classes to facilitate the usage of the whole mechanism… And the test-setup endpoint not only has the power to wipe data, but can also create test users, and maybe more in the future.
The implementation
The clean architecture approach of the whole application means that the classes for our use-case are distributed across multiple modules:
The `rest-api` module contains the `TestSetupResource`. It does not use the regular security mechanism, but checks the request header directly, because we may have to use it while there are no users in the database.
Maybe it is a good idea to use a config property to initially create an admin user with a given password. The endpoint could then be allowed for the admin only. I have to think this through first.
```java
@Produces(MediaType.TEXT_PLAIN)
@Consumes(MediaType.APPLICATION_JSON)
@Path(TestSetupResource.PATH)
@PermitAll
public class TestSetupResource {

    public static final String PATH = "/test-setup"; // illustrative path

    @ConfigProperty(name = DangerousTestSetupProperties.TOKEN)
    public Optional<String> expectedAccessToken;

    @Inject
    TestSetupService testSetupService;

    @POST
    public Response resetGachou(@HeaderParam("X-Gachou-Test-Setup-Token") String accessToken,
                                @RequestBody TestSetupRequest testSetupRequest) {
        if (expectedAccessToken.isEmpty()) {
            return Response.serverError().entity("Testing endpoint disabled").build();
        }
        if (!expectedAccessToken.get().equals(accessToken)) {
            return Response.status(403).build();
        }
        testSetupService.wipeData();
        testSetupRequest
                .getUsersToCreate()
                .forEach(user -> testSetupService.addUser(user.getUsername(), user.getPassword()));
        return Response.accepted().build();
    }
}
```
The resource class uses the `TestSetupService`, which is placed in the `core` module.
```java
@RequiredArgsConstructor
@Singleton
public class TestSetupService {

    private final ResetDatabase resetDatabase;
    private final UserAccounts userAccounts;

    @Transactional
    public void wipeData() {
        resetDatabase.resetDatabase();
    }

    @Transactional
    public void addUser(String username, char[] password) {
        userAccounts.addUser(username, password);
    }
}
```
`ResetDatabase` and `UserAccounts` are interfaces in the `core` module that have implementations in the `database` module. These implementations do the actual work. When we add more modules to the application, we can implement reset methods for those modules as well.
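The aggregation across modules could look roughly like this; `ResetStep` and `ResetAll` are names I made up for this sketch, standing in for per-module interfaces like `ResetDatabase`:

```java
import java.util.List;

// Each infrastructure module (database, ElasticSearch, S3, ...)
// contributes one reset step via an interface in the core module.
interface ResetStep {
    void reset();
}

// Hypothetical aggregator: the test-setup use-case wipes every
// external system in a defined order.
final class ResetAll {

    private final List<ResetStep> steps;

    ResetAll(List<ResetStep> steps) {
        this.steps = steps;
    }

    void run() {
        steps.forEach(ResetStep::reset);
    }
}
```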
Conclusion
To follow up on the basic rules from the beginning of this article:
- We can now write integration tests that are independent of other tests.
- We can now run integration tests multiple times without failures.
Concerning “A test should ideally test one thing” and “Tests should be fast to run”, there might still be a problem. If we have to reset the database, Elasticsearch, S3, message queues and other services before running the tests, this may slow down the test setup. In that case, I would tend to reduce the overhead by writing fewer, but larger tests.
The code snippets in this post are taken from the Gachou backend, more specifically from the commit of April 30, 2022.