Hello, thank you very much for your work. I have a question about the rendering process of the CAPE dataset.
I noticed that ICON includes a script for processing Thuman2 data (render_batch.py), and that this code defines a fixed set of camera parameters.
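Roughly, I mean an orthographic setup along these lines (a sketch with assumed values; the exact numbers in render_batch.py may differ):

```python
# Illustrative orthographic camera parameters (assumed values,
# paraphrased rather than copied from ICON's render_batch.py).
IMG_SIZE    = 512     # rendered image resolution
ORTHO_RATIO = 0.4     # world units per output pixel (orthographic zoom)
CAM_NEAR    = -100.0  # near clipping plane
CAM_FAR     = 100.0   # far clipping plane
```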
At the same time, there is code that computes a scale factor for normalizing each scan.
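By scale factor I mean something like the sketch below, i.e. normalizing each scan to a fixed height around its bounding-box center (the target height, units, and up-axis here are my assumptions):

```python
import numpy as np

def compute_scale_and_center(vertices, target_height=180.0, up_axis=1):
    """Normalize a scan so its vertical extent equals target_height.

    vertices: (N, 3) array of mesh / pseudo point cloud vertices.
    Sketch only: the target height, units, and up-axis convention
    may differ from what render_batch.py actually uses.
    """
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    scale = target_height / (vmax[up_axis] - vmin[up_axis])
    center = 0.5 * (vmax + vmin)  # bounding-box center
    return scale, center
```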
I have successfully projected the mesh (or pseudo point cloud) of Thuman2 onto a 2D image using these parameters and obtained results aligned with your RGB images. However, when I project the mesh (or pseudo point cloud) of CAPE using the same parameters, the results do not match the RGB images directly downloaded from cape_3views.
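For reference, my alignment check is essentially the following: scale and center the vertices, project them orthographically, and overlay the projected points on the RGB image (a minimal sketch reusing the assumed values above):

```python
import numpy as np

def project_ortho(vertices, scale, center, img_size=512, ortho_ratio=0.4):
    """Orthographically project normalized vertices to pixel coordinates.

    Sketch only: axis directions, y-flip, and view rotation may differ
    from the renderer that produced the cape_3views images.
    """
    v = (vertices - center) * scale                  # normalize the scan
    px = v[:, 0] / ortho_ratio + img_size / 2.0      # x -> image column
    py = img_size / 2.0 - v[:, 1] / ortho_ratio      # y -> image row (flipped)
    return np.stack([px, py], axis=1)
```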
So I would like to ask: what parameters did you use when rendering the CAPE dataset?
Looking forward to your reply. Thank you!