Should each itest class spin up a fresh Cryostat instance? #776

Open
andrewazores opened this issue Dec 6, 2021 · 2 comments
Labels: question (Further information is requested), test

Comments

@andrewazores
Member

Currently the itest failsafe runner is responsible for starting the Cryostat test instance before any of the integration test suite classes execute, and for tearing down the test pod after the whole suite has completed. This is nice because it minimizes the amount of time spent waiting for Cryostat instance startups during testing: it only has to happen once. However, the downside is that every integration test must always clean up fully after itself, even if it fails or completes abnormally; otherwise its lingering test resources are highly likely to interfere with the expectations and results of other tests and lead to cascading failures that can be difficult to diagnose.
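For illustration, that kind of defensive cleanup looks roughly like the sketch below. The target ID, recording name, and port are hypothetical, and the exact API path may differ from what the itests actually hit:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.AfterEach;

class SomeItest {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    @AfterEach
    void cleanUp() throws Exception {
        // this must run even when the test body fails, otherwise the leftover
        // recording can break later tests' expectations. The target ID and
        // recording name here are hypothetical placeholders.
        HttpRequest req =
                HttpRequest.newBuilder(
                                URI.create(
                                        "http://localhost:8181/api/v1/targets/localhost/recordings/itest_recording"))
                        .DELETE()
                        .build();
        HTTP.send(req, HttpResponse.BodyHandlers.discarding());
    }
}
```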

One plain and simple way to solve this problem is to not use the failsafe runner for starting/stopping the Cryostat instance under test. We already have some test infrastructure within the itest utils for starting/stopping containers by calling out to Podman via a process exec, which we use for testing things like target JVM discovery or interleaved requests to multiple targets. This same infrastructure could be used to start up the Cryostat test instance itself. Each integration test class could then be responsible for starting its own Cryostat instance in a @BeforeAll and stopping it in an @AfterAll, as sketched below.

This would guarantee that every integration test is fully isolated from all the others, and would simplify writing the tests because cleanup would not be required at all (other than any needed between tests of the same class, though that is probably best avoided since it comes with some of the same problems we're trying to solve here). There should then be no possibility of cascading itest failures. The downside is that the whole itest suite would take significantly longer to execute, since we have to wait for n Cryostat startups/teardowns instead of just 1. IMO that's a worthwhile tradeoff.
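A minimal sketch of that per-class lifecycle, assuming a plain ProcessBuilder exec of Podman. The class name, image, and port mapping are placeholders, not the actual itest configuration:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;

abstract class IsolatedItest {

    // placeholder image; the real itests would use the locally-built image
    private static final String IMAGE = "quay.io/cryostat/cryostat:latest";

    protected static String containerId;

    @BeforeAll
    static void startCryostat() throws Exception {
        // `podman run -d` prints the new container's ID on stdout
        Process run =
                new ProcessBuilder("podman", "run", "-d", "--rm", "-p", "8181:8181", IMAGE)
                        .start();
        try (BufferedReader stdout =
                new BufferedReader(new InputStreamReader(run.getInputStream()))) {
            containerId = stdout.readLine();
        }
        if (!run.waitFor(1, TimeUnit.MINUTES) || run.exitValue() != 0) {
            throw new IllegalStateException("podman run failed");
        }
        // real infra would also poll Cryostat's health endpoint until it's ready
    }

    @AfterAll
    static void stopCryostat() throws Exception {
        if (containerId != null) {
            new ProcessBuilder("podman", "kill", containerId)
                    .start()
                    .waitFor(1, TimeUnit.MINUTES);
        }
    }
}
```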

@andrewazores
Member Author

> The downside is that the whole itest suite would take significantly longer to execute, since we have to wait for n Cryostat startups/teardowns instead of just 1. IMO that's a worthwhile tradeoff.

Actually, if each itest class has its own independent Cryostat instance under test, then these instances can be created with randomized tmpfs bindings for all volumes and with randomized HTTP and JMX port numbers, which can be passed as parameters to the test code. The itest classes can then be run in parallel. Most of the time spent during itests is really idle: waiting for time to pass and JFR data to be recorded, or waiting for external target containers to spin up, be discovered, tear down, and get lost. So even on low-CPU test runners this strategy might actually reduce the total wall time for the itest suite. The limiting factor would likely become how much RAM is available on the test runner to divide between Cryostat instances and any external target containers spun up by each test.
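A sketch of how those randomized per-class parameters might be produced; the class and field names here are hypothetical:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.nio.file.Files;
import java.nio.file.Path;

final class CryostatSpec {

    final int httpPort;
    final int jmxPort;
    final Path tmpDir;

    private CryostatSpec(int httpPort, int jmxPort, Path tmpDir) {
        this.httpPort = httpPort;
        this.jmxPort = jmxPort;
        this.tmpDir = tmpDir;
    }

    static CryostatSpec randomized() throws IOException {
        // each itest class gets its own ports and scratch directory, so the
        // `podman run` in its @BeforeAll can map httpPort/jmxPort and bind
        // tmpDir without colliding with any other concurrently-running class
        return new CryostatSpec(
                freePort(), freePort(), Files.createTempDirectory("cryostat-itest"));
    }

    // ask the OS for an ephemeral port; there is a small race window between
    // closing the probe socket and the container binding the same port
    private static int freePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            return s.getLocalPort();
        }
    }
}
```

With each class fully self-contained like this, JUnit 5's built-in parallelism (junit.jupiter.execution.parallel.enabled=true with per-class concurrency) could then run the classes concurrently.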

@andrewazores andrewazores moved this to Todo in 2.4.0 release Apr 25, 2023
@andrewazores andrewazores moved this from Todo to Backlog in 2.4.0 release May 23, 2023
@andrewazores andrewazores moved this from Backlog to Stretch Goals in 2.4.0 release May 23, 2023
@andrewazores
Member Author

See https://github.com/cryostatio/cryostat/issues/1496 for a more up-to-date approach to this idea.
