Reproducing results with multiple GPUs #14
Hi Yuedong, thank you for open-sourcing your great work!
When I trained the model using 3 Nvidia RTX 3090s (batch size 4 per GPU), I got significantly worse results on re10k. Does a smaller batch size or multi-GPU training significantly affect the performance of the model?
By the way, I used the official weights and got results consistent with the paper.
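For context, under data-parallel training the effective batch size is the per-GPU batch size times the number of GPUs, i.e. 3 × 4 = 12 in this setup. Below is a minimal Python sketch of that arithmetic together with the common linear learning-rate scaling heuristic; the base values are assumptions for illustration, not the repository's actual configuration.

```python
# Assumed reference values; not taken from the repository's actual config.
BASE_BATCH_SIZE = 14   # hypothetical batch size behind the released weights
BASE_LR = 2e-4         # hypothetical base learning rate

def effective_batch_size(per_gpu_batch: int, num_gpus: int) -> int:
    """Under data-parallel training each GPU processes its own mini-batch,
    so one optimizer step averages gradients over per_gpu_batch * num_gpus
    samples."""
    return per_gpu_batch * num_gpus

def scaled_lr(effective_bs: int) -> float:
    """Linear scaling rule (a common heuristic, not a guarantee): scale the
    learning rate in proportion to the effective batch size."""
    return BASE_LR * effective_bs / BASE_BATCH_SIZE

bs = effective_batch_size(per_gpu_batch=4, num_gpus=3)
print(bs)             # 12, versus the assumed reference of 14
print(scaled_lr(bs))  # learning rate adjusted for the smaller batch
```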
Comments

Hi @RuijieZhu94, thanks for your interest in our work. Yes, there is a small bug in feature extraction introduced during code cleaning. It is mainly related to […]. Would you mind updating the code following our last commit (297338f) and re-training the model? Let us keep this issue open for you to update the results. As a quicker debugging check, your model should reach around PSNR=23 at step 10K with the updated code, versus around PSNR=20 at step 10K if it still contains the aforementioned feature-extraction bug. By the way, we use […]
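For reference, the 10K-step sanity check above uses the standard peak signal-to-noise ratio. The sketch below shows the usual definition; the tensor names and the [0, 1] value range are assumptions, and this is not the repository's own evaluation code.

```python
import torch

def psnr(pred: torch.Tensor, gt: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (in dB) for images in [0, max_val]."""
    mse = torch.mean((pred - gt) ** 2)
    return (10.0 * torch.log10(max_val ** 2 / mse)).item()

# Hypothetical usage during training; names are illustrative only:
# rendered, target = model(batch), batch["target"]["image"]
# print(f"step-10K PSNR: {psnr(rendered, target):.2f} dB")  # expect ~23 dB when fixed
```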
Hi Yuedong, thanks for your prompt reply. I will retrain this model in the next few days.
Hi Yuedong, I retrained this model with bs=12 and got the following result: […]
Thank you for your help.
@RuijieZhu94 Could you share the link to the training dataset? I reached out to the author of pixelsplat for the link, but I cannot open it.
Please contact me by email: [email protected]. |