
Any particular reason why training with LPIPS loss? #82

Open
jotix16 opened this issue Dec 9, 2024 · 1 comment

@jotix16

jotix16 commented Dec 9, 2024

Thank you for the nice work :)

I was unable to work out why the LPIPS loss is used during training.

It clearly makes sense with regard to the evaluation pipeline (i.e. directly optimizing what you evaluate on).

Could you say whether LPIPS is necessary? Is it motivated empirically, or just adopted by convention?

Why not use the default loss (L1 + SSIM term) from the seminal 3DGS work?

@donydchen
Owner

donydchen commented Dec 19, 2024

Hi, @jotix16. Thanks for your appreciation.

Using an LPIPS loss seems to be common practice in the feed-forward setting. As I recall, the visual quality was a bit worse and the quantitative scores dropped slightly when the LPIPS loss was removed. You can verify this by re-training this project with the LPIPS weight set to 0 at
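For reference, the 3DGS objective mentioned in the question combines L1 and a D-SSIM term, and feed-forward methods often add a weighted LPIPS term on top. Below is a minimal sketch of such a combination (the function name and weight values are hypothetical, not this repository's actual configuration); setting the LPIPS weight to 0 recovers the plain 3DGS loss, matching the ablation described above.

```python
def total_loss(l1, d_ssim, lpips, ssim_weight=0.2, lpips_weight=0.05):
    """Hypothetical combined training objective.

    l1:      mean absolute pixel error
    d_ssim:  1 - SSIM(rendered, target)
    lpips:   perceptual distance from an LPIPS network
    Weights are illustrative, not the values used in this repository.
    """
    # 3DGS base loss: (1 - lambda) * L1 + lambda * D-SSIM, with lambda = 0.2
    base = (1.0 - ssim_weight) * l1 + ssim_weight * d_ssim
    # Optional perceptual term; lpips_weight = 0 disables it entirely.
    return base + lpips_weight * lpips
```

With `lpips_weight=0`, the returned value depends only on the L1 and D-SSIM terms, which is the re-training experiment suggested in the reply.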
