
Some questions about the reproduction and testing process #61

Open · Warrior456 opened this issue Sep 1, 2024 · 4 comments

@Warrior456

Hi, thanks for sharing this amazing work.

I used the pre-trained model provided for evaluation.

For RE10K, the metrics I get are lower than those in the paper, and I don't know why the test ends early (it stops at 6474/7286).

[screenshot: RE10K evaluation output]

For ACID, I got metrics similar to those in the paper, but the testing process also ended early.
[screenshot: ACID evaluation output]

Looking forward to your reply!

@donydchen (Owner)

Hi @Warrior456, thanks for your interest in our work.

I just ran a test using the newest released code on my machine, and the scores precisely matched the ones reported in our paper. See the screenshot below.

[screenshot: RE10K test scores matching the paper]

I suspect that minor differences among the versions of some Python packages might cause this. Hence, I have added a requirements_w_version.txt for your reference. I would appreciate it if you could let me know whether matching the package versions helps you get the correct scores.
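To match the pinned versions, a standard pip install of that file should suffice (ideally in a fresh virtual environment):

```
pip install -r requirements_w_version.txt
```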

Besides, regarding why the test ended early at 6474/7286: this is expected and nothing to worry about. In short, some scenes are skipped; more details can be found at #60 (comment).

@Warrior456 (Author)

Thank you very much for your reply! I have now managed to reproduce the results.

I noticed that you have a previous work, MatchNeRF. I would like to run MatchNeRF on the RE10K and ACID datasets; do you have any suggestions for that?

1. Do you think MatchNeRF or MVSNeRF can get good results on these wide-baseline datasets?

2. Is the depth range of ACID and RE10K 1~100, and are the depth candidates then sampled within this range?

3. Could you tell me how long it takes to train MVSplat?

Looking forward to your earliest reply! Thanks in advance!

@donydchen (Owner)

Hi @Warrior456, glad to know that my previous suggestion helped.

  • I feel MVSNeRF (Anpei Chen et al., ICCV 2021) might struggle with wide-baseline data, since it relies on a single cost volume built upon one randomly selected source view. In this case, the quality of the rendered novel views depends on how the source and reference views are selected and on how far the target views differ from the input ones, both of which are challenging with wide-baseline inputs. On the other hand, MatchNeRF might work better, as it relies on feature matching, which essentially resembles a camera frustum volume built on the target view (not the source view). Still, in the extreme case where the wide-baseline inputs contain very limited overlap, MatchNeRF will fail to find any meaningful matching information, leading to unsatisfactory outputs. Overall, both MVSNeRF and MatchNeRF are reasonable comparison methods, and you can try them to see how the results go.

  • Correct. We set the depth range as 1~100 for both RE10K and ACID and sample the depth candidates within that range (a minimal sampling sketch follows this list); more discussion can be found at Custom dataset training #11 (comment).

  • With the default settings (batch size: 14, training step: 300K), training our MVSplat takes around 5.5 days on a single 80G A100 GPU.
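To make the answer to question 2 concrete, here is a minimal sketch of sampling depth candidates in the 1~100 range. Whether the candidates are spaced uniformly in depth or in inverse depth is not specified in this thread, so the inverse-depth spacing below (a common choice in plane-sweep style methods) is an assumption, and the function name and candidate count are illustrative only:

```python
import torch

def sample_depth_candidates(near: float = 1.0, far: float = 100.0,
                            num_candidates: int = 128) -> torch.Tensor:
    """Return `num_candidates` depths between `near` and `far` (the 1~100 range above).

    Uniform spacing in inverse depth (disparity) places more candidates close
    to the camera; swap in torch.linspace(near, far, num_candidates) for
    uniform-in-depth spacing instead.
    """
    inv_depths = torch.linspace(1.0 / near, 1.0 / far, num_candidates)
    return 1.0 / inv_depths

depths = sample_depth_candidates()
print(depths[0].item(), depths[-1].item())  # 1.0 and 100.0
```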

@Rashfu commented Sep 15, 2024

Hi @donydchen, I have a question regarding monitoring the training process. I would like to track the training progress, similar to how it is done in TensorBoard. However, during my training, I noticed that the files in the output directory do not seem to contain training information; in particular, the *.log files are empty. Is there a specific configuration I need to enable?

Thanks in advance!
