Investigate performance of TRSS #3354
FYI @Haroon-Khel since you dealt with #3335
I'd quite like to try a complete shutdown of the docker service, if we can schedule a time to do that, to see if that frees up the "lost" space. Let me know when would be appropriate.
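As a rough aid for that experiment, here is a minimal sketch of how the "lost" space could be measured before and after the shutdown. It assumes docker is using its default data-root of `/var/lib/docker` on this host (an assumption), and relies only on the standard `docker system df` command:

```python
# Sketch: record docker disk usage before/after a full docker service restart,
# to see whether any "lost" space is reclaimed.
import shutil
import subprocess

DOCKER_ROOT = "/var/lib/docker"  # assumption: default docker data-root on this host

def report(label: str) -> None:
    usage = shutil.disk_usage(DOCKER_ROOT)
    print(f"[{label}] used: {usage.used / 2**30:.1f} GiB, "
          f"free: {usage.free / 2**30:.1f} GiB")
    # 'docker system df' summarises space used by images, containers and volumes
    subprocess.run(["docker", "system", "df"], check=False)

report("before restart")
# ... systemctl stop docker / systemctl start docker run out-of-band ...
report("after restart")
```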
Followed that up with an explicit …
OK ... so it looks like the mongodb container has a log file on the host (outside the container, under `/var/lib/docker/containers/<mongo_container_uuid>/`). This was 47GB and was filling up the docker file system. It's unclear whether it's directly related, but I've gzipped it, shut down the server and restarted it. The log file is still growing, so hopefully we can get some attention from the TRSS developers to understand the cause before it becomes a problem again. I have pinged @llxia for advice in the AQA slack channel. I have started the processes under nohup, and the output from …

@smlambert Can you confirm whether the performance issues you described are now resolved by restarting the docker subsystem?
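For reference, a hedged sketch of how a runaway container log like this can be located. It assumes docker's default json-file logging driver, which writes `<container_id>-json.log` under `/var/lib/docker/containers/<container_id>/`; the size threshold below is arbitrary:

```python
# Sketch: find oversized *-json.log files under docker's container directory.
from pathlib import Path

CONTAINERS_DIR = Path("/var/lib/docker/containers")
THRESHOLD_GIB = 1  # arbitrary threshold for "worryingly large"

for log_file in CONTAINERS_DIR.glob("*/*-json.log"):
    size_gib = log_file.stat().st_size / 2**30
    if size_gib > THRESHOLD_GIB:
        print(f"{size_gib:6.1f} GiB  {log_file}")
```

Longer term, docker's documented `log-opts` for the json-file driver (`max-size`, `max-file` in `/etc/docker/daemon.json`) could cap these logs so they rotate instead of growing without bound.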
We should look at what level the DB profiler is running at. The MongoDB documentation says the profiler is off by default, but we should verify that, and ensure it is started either off or at level 1 with a filter so that it is only moderately active.
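For what it's worth, a sketch of how the profiler level could be checked and, if necessary, lowered. It assumes pymongo is available and mongod is reachable on the default port; the connection string, database name and `slowms` value are placeholders:

```python
# Sketch: read and (if needed) lower the MongoDB profiler level.
# Level 0 = off, 1 = only operations slower than slowms, 2 = everything.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["exampleDb"]  # placeholder database name

status = db.command("profile", -1)   # -1 just reports the current settings
print("current profiling level:", status["was"], "slowms:", status.get("slowms"))

if status["was"] == 2:
    # Drop back to level 1 and only record genuinely slow operations.
    db.command("profile", 1, slowms=500)
```

Newer MongoDB versions also accept a `filter` document on the `profile` command, which would be the way to implement the "level 1 with a filter" suggestion above.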
@Haroon-Khel Can I ask you to take a look at this please, since you've done more work on the setup and configuration of this server under #trss?
Seeing as we are nearing the release, I propose we increase the number of cores on the machine to see if this helps. It's not a definitive solution, but it may improve performance and thereby help with triage etc.
Are we seeing high CPU load on the server at the moment that will be alleviated by this? The throttling I put in place at the nginx level should have eliminated some of the problems with the client requests. Fixes have gone into TRSS to resolve that now, although they have not been deployed on our server yet. Ref:
They have now been deployed on our server by my manually running the sync job there (related: adoptium/aqa-test-tools#856 (comment)). Noting that the rate-limiting could/should now be adjusted.
Performance has improved via the several changes linked in the issues above. The last thing to do is to get the sync script running regularly, which is tracked under adoptium/aqa-test-tools#856
As per adoptium/temurin#13 (comment), the performance of TRSS was sub-par in the January release. It needs investigation, as it has degraded compared to before the release period (it was too slow at filling out results info on release pipelines to be useful during the release).
The main page is also very slow to load, as is generating the release summary report.