Some questions about the reproduction and testing process #61
Hi @Warrior456, thanks for your interest in our work. I just ran a test using the newest released code on my machine, and the scores matched the ones reported in our paper exactly. See below. I suspect minor differences between the versions of some Python packages might be causing this, so I have added a requirements_w_version.txt for your reference. I would appreciate a reply letting me know whether matching the package versions helps you get the correct scores. As for why the test ended early at …
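When version mismatches are the suspected cause, it can help to diff the installed environment against the pinned file before re-running. A minimal sketch, assuming the pinned file uses the usual `name==version` format (the file name and format here are taken from the comment above, nothing else is from the repo):

```python
# Compare installed package versions against a pinned requirements file
# (e.g. the repo's requirements_w_version.txt). Assumes "name==version"
# lines; comments and unpinned lines are skipped.
from importlib.metadata import version, PackageNotFoundError

def check_pinned(path):
    """Yield (package, pinned, installed) for every mismatched pin."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "==" not in line:
                continue
            name, pinned = line.split("==", 1)
            try:
                installed = version(name)
            except PackageNotFoundError:
                installed = None  # package not installed at all
            if installed != pinned:
                yield name, pinned, installed

if __name__ == "__main__":
    for name, pinned, installed in check_pinned("requirements_w_version.txt"):
        print(f"{name}: pinned {pinned}, installed {installed}")
```

Running this before the evaluation makes it easy to see exactly which packages diverge from the tested configuration.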
Thank you very much for your reply! I've now managed to reproduce the results. I noticed that you have a previous work, MatchNeRF, and I would like to run MatchNeRF on the RE10K and ACID datasets. Do you have any suggestions for that?
1) Do you think MatchNeRF or MVSNeRF can get good results on these wide-baseline datasets?
2) Is the depth range of ACID and RE10K 1~100, with the depth candidates sampled within this range?
3) Could you tell me how long it takes to train MVSplat?
Looking forward to your reply! Thanks in advance!
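For question 2, one common way plane-sweep style methods generate depth candidates over a wide range like 1~100 is to sample uniformly in inverse depth (disparity) rather than in depth. This is only a hedged sketch of that idea; the actual sampling scheme used by MVSplat or MatchNeRF may differ:

```python
# Sketch: depth candidates sampled uniformly in inverse depth between a
# near plane and a far plane (here 1 and 100, the range asked about).
import numpy as np

def depth_candidates(near=1.0, far=100.0, num=128):
    """Return `num` depths between near and far, uniform in disparity."""
    inv = np.linspace(1.0 / near, 1.0 / far, num)  # uniform in 1/depth
    return 1.0 / inv  # candidates are denser near the camera

d = depth_candidates()  # d[0] == near, d[-1] == far, monotonically increasing
```

Sampling in inverse depth concentrates candidates close to the camera, where a fixed depth error corresponds to a larger pixel reprojection error, which is why it is a popular default for wide depth ranges.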
Hi @Warrior456, glad to know that my previous suggestion helped.
Hi @donydchen, I have a question about monitoring the training process. I would like to track training progress, similar to how it's done in TensorBoard. However, during my training I noticed that the files in the output directory do not seem to contain training information; in particular, the *.log files are empty. Is there a specific configuration I need to enable? Thanks in advance!
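Empty *.log files are often a handler or level issue rather than missing training information. As a generic fallback while waiting for the project-specific answer, progress can be captured to a file with Python's standard logging; everything here (logger name, path, format) is illustrative, not the repo's actual setup:

```python
# Sketch: attach a FileHandler so training messages actually land in a
# *.log file. Path and logger name are hypothetical examples.
import logging

def setup_file_logging(path="outputs/train.log"):
    """Return a logger that writes INFO-level messages to `path`."""
    logger = logging.getLogger("train")
    logger.setLevel(logging.INFO)  # default WARNING would drop INFO lines
    handler = logging.FileHandler(path)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    )
    logger.addHandler(handler)
    return logger
```

Usage would look like `log = setup_file_logging(); log.info("step=100 loss=0.42")`, after which the file is no longer empty and can be tailed to monitor progress.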
Hi, thanks for sharing this amazing work.
I used the provided pre-trained model for evaluation.
For RE10K, the metrics are lower than those in the paper, and I don't know why the run ends early (the test stops at 6474/7286). For ACID, I got metrics similar to those in the paper, but the testing process also ended early.
Looking forward to your reply.