Currently the itest failsafe runner is responsible for starting the Cryostat test instance before any of the integration test suite classes execute, and for tearing down the test pod after the whole suite completes. This is nice because it minimizes the time spent waiting for Cryostat instance startups during testing - it only has to happen once. However, the downside is that every integration test must always clean up fully after itself, even if it fails or exits abnormally; otherwise its lingering test resources are highly likely to interfere with the expectations and results of other tests and lead to cascading failures that can be difficult to diagnose.
One simple way to solve this problem is to not use the failsafe runner for starting/stopping the Cryostat instance under test. We already have some test code infrastructure within the itest utils for starting/stopping containers by calling out to Podman via a process exec, which we use for testing things like target JVM discovery or interleaved requests to multiple targets. This same test infra could be used to start up the Cryostat test instance itself. Each integration test class could then be responsible for starting its own Cryostat instance in a @BeforeAll and stopping it in an @AfterAll. This would guarantee that every integration test is fully isolated from all the others, and would simplify writing the tests because cleanup would not be required at all (other than any needed in between tests of the same class, but this would probably be best avoided because it comes with some of the same problems we're trying to solve here). Then there should be no possibility of cascading itest failures. The downside is that the whole itest suite would take significantly longer to execute, since we have to wait for n Cryostat startups/teardowns instead of just 1. IMO that's a worthwhile tradeoff.
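A minimal sketch of what that per-class lifecycle could look like, assuming the existing Podman process-exec helpers are used to start and stop the container. The image reference, mount path, and port mappings below are illustrative assumptions, not the project's actual configuration:

```java
import java.util.List;

// Hypothetical sketch: each itest class starts its own disposable Cryostat
// container in @BeforeAll and stops it in @AfterAll, instead of relying on
// one shared instance managed by the failsafe runner.
public class PerClassCryostat {

    // Arguments for starting one isolated, throwaway Cryostat instance.
    static List<String> startArgs(String name, int httpPort, int jmxPort) {
        return List.of(
                "podman", "run", "--rm", "-d", "--name", name,
                "--tmpfs", "/opt/cryostat.d",      // per-instance throwaway storage
                "-p", httpPort + ":8181",          // HTTP API
                "-p", jmxPort + ":9091",           // JMX
                "quay.io/cryostat/cryostat:latest" // hypothetical image ref
        );
    }

    // Arguments for tearing the instance down in @AfterAll.
    static List<String> stopArgs(String name) {
        return List.of("podman", "stop", name);
    }

    public static void main(String[] args) {
        // In a real base class these command lists would be run via
        // ProcessBuilder, e.g.:
        //   new ProcessBuilder(startArgs("cryostat-itest", 8181, 9091))
        //       .inheritIO().start().waitFor();
        System.out.println(String.join(" ", startArgs("cryostat-itest", 8181, 9091)));
        System.out.println(String.join(" ", stopArgs("cryostat-itest")));
    }
}
```

A shared abstract base class holding the @BeforeAll/@AfterAll pair would keep this boilerplate out of the individual test classes.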
> The downside is that the whole itest suite would take significantly longer to execute, since we have to wait for n Cryostat startup/teardowns instead of just 1. IMO that's a worthwhile tradeoff.
Actually - if each itest class has its own independent Cryostat instance-under-test, then these instances can be created with randomized tmpfs bindings for all volumes and with randomized HTTP and JMX port numbers, which can be passed as parameters to the test code. Then the itest classes can be run in parallel. Most of the time spent during itests is really idle, waiting for time to pass and JFR data to be recorded or waiting for external target containers to spin up/be discovered/tear down/get lost, so even on low-CPU test runners this strategy might actually reduce the total wall time for the itest suite. The limiting factor would likely become how much RAM is available on the test runner to divide between Cryostat instances and any external target containers spun up by each test.
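A simple way to get collision-free port numbers is to bind to port 0 and let the OS pick a free ephemeral port; this is a generic JDK technique, not anything Cryostat-specific:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch of per-class port randomization so parallel itest classes don't
// collide on HTTP or JMX ports. Binding a ServerSocket to port 0 asks the
// OS for any free ephemeral port, which we then release and hand to the
// container mapping and the test client.
public class RandomPorts {

    static int freePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
            s.setReuseAddress(true);
            return s.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int httpPort = freePort();
        int jmxPort = freePort();
        // These would be forwarded to podman as -p httpPort:8181 -p jmxPort:9091
        // and exposed to the test class, e.g. as system properties.
        System.out.println("HTTP=" + httpPort + " JMX=" + jmxPort);
    }
}
```

With isolated instances and unique ports in place, class-level parallelism itself can be enabled through standard JUnit 5 configuration (junit.jupiter.execution.parallel.enabled=true and junit.jupiter.execution.parallel.mode.classes.default=concurrent in junit-platform.properties).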