
How exactly is the pose image F_t rendered? #11

Open · TZYSJTU opened this issue Sep 26, 2024 · 8 comments

@TZYSJTU

TZYSJTU commented Sep 26, 2024

What is the actual pose image F_t that you render? Is the final rendered pose image the colored one shown in the pipeline figure, or the skeleton-like one shown in a later figure? This really confuses me!

[two screenshot attachments]

@TZYSJTU changed the title from "How is the pose image rendered?" to "How exactly is the pose image F_t rendered?" Sep 26, 2024
@Dorniwang

My understanding is that this is a rasterization process; the skeleton joints can be bound to the SMPL mesh and then rasterized into this feature map.

@TZYSJTU (Author)

TZYSJTU commented Sep 26, 2024

> My understanding is that this is a rasterization process; the skeleton joints can be bound to the SMPL mesh and then rasterized into this feature map.

Which one, the former or the latter?

@Dorniwang

> My understanding is that this is a rasterization process; the skeleton joints can be bound to the SMPL mesh and then rasterized into this feature map.
>
> Which one, the former or the latter?

The rasterization result is the former one, which is shown in the pipeline figure.

@johndpope

johndpope commented Sep 26, 2024

These are the most related papers; this one in particular:
https://arxiv.org/abs/2408.07481

[DeCo: Decoupled Human-Centered Diffusion Video Editing with Motion Consistency]
[ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis]
[One-Shot Learning Meets Depth Diffusion in Multi-Object Videos]
[AMG: Avatar Motion Guided Video Generation]
[Scene123: One Prompt to 3D Scene Generation via Video-Assisted and Consistency-Enhanced MAE]

I started this repo; it has some rasterization code using pytorch3d:
https://github.com/johndpope/MIMO-hack/blob/main/main.py

The other way is to use mitsuba3; I started playing around with this the other day here:
https://github.com/johndpope/DiPIR-hack

This looks (almost) helpful for SMPL-X stuff:
https://github.com/RammusLeo/DPMesh

UPDATE
Check out this:
https://github.com/zshyang/amg

https://yukun-huang.github.io/DreamWaltz-G/
v1 renderer:
https://github.com/IDEA-Research/DreamWaltz/blob/main/core/nerf/renderer.py

@menyifang (Owner)

> My understanding is that this is a rasterization process; the skeleton joints can be bound to the SMPL mesh and then rasterized into this feature map.

Yes, the pose representation is an interpolated feature map obtained via rasterization, visualized as the former one.
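Mechanically, "interpolated feature map via rasterization" means each pixel covered by a mesh face receives a barycentric blend of the features attached to that face's vertices. Below is a minimal NumPy sketch of that idea for a single triangle; it is illustrative only, not the authors' code (a real implementation would rasterize the full SMPL mesh with something like pytorch3d's `MeshRasterizer`):

```python
import numpy as np

def rasterize_triangle_features(verts_2d, vert_feats, H, W):
    """Rasterize one triangle, interpolating per-vertex feature vectors
    with barycentric coordinates into an (H, W, D) feature map."""
    D = vert_feats.shape[1]
    fmap = np.zeros((H, W, D), dtype=np.float32)
    a, b, c = verts_2d  # pixel-space (x, y) of the three vertices
    denom = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    for y in range(H):
        for x in range(W):
            # barycentric coordinates of pixel (x, y) w.r.t. the triangle
            w0 = ((b[1] - c[1]) * (x - c[0]) + (c[0] - b[0]) * (y - c[1])) / denom
            w1 = ((c[1] - a[1]) * (x - c[0]) + (a[0] - c[0]) * (y - c[1])) / denom
            w2 = 1.0 - w0 - w1
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # pixel inside the triangle
                fmap[y, x] = w0 * vert_feats[0] + w1 * vert_feats[1] + w2 * vert_feats[2]
    return fmap

verts = np.array([[1.0, 1.0], [6.0, 1.0], [1.0, 6.0]])
feats = np.eye(3, dtype=np.float32)  # one-hot feature per vertex
fmap = rasterize_triangle_features(verts, feats, 8, 8)
```

Pixels at a vertex recover that vertex's feature exactly, and pixels inside the face get a smooth blend, which is what makes the result an "interpolated feature map" rather than a flat color render.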

@Dorniwang

> My understanding is that this is a rasterization process; the skeleton joints can be bound to the SMPL mesh and then rasterized into this feature map.
>
> Yes, the pose representation is an interpolated feature map obtained via rasterization, visualized as the former one.

I have another question: which pretrained repose model do you use in your work?

@lastsongforu

> My understanding is that this is a rasterization process; the skeleton joints can be bound to the SMPL mesh and then rasterized into this feature map.
>
> Yes, the pose representation is an interpolated feature map obtained via rasterization, visualized as the former one.

I guess the dimension of the latent code is not 3 (RGB) but a larger number such as 16 or 32? And is it learnable?
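For what it's worth, a "learnable latent code" usually means a per-vertex parameter table that gradients flow back into; the rasterizer only gathers and interpolates its rows. A hedged sketch, assuming (not confirmed by the paper) SMPL's 6890 vertices and a latent width of 16, with one hand-written SGD step standing in for backprop:

```python
import numpy as np

# SMPL has 6890 vertices; D = 16 is an assumed latent width, not a number
# taken from the paper. The table itself is the learnable parameter.
rng = np.random.default_rng(0)
V, D = 6890, 16
latents = rng.normal(0.0, 0.01, size=(V, D)).astype(np.float32)

# Forward pass (sketch): the rasterizer gathers the codes of the vertices
# covering each pixel and interpolates them into an (H, W, D) map.
hit_verts = np.array([10, 20, 30])   # hypothetical visible vertices
pixel_feats = latents[hit_verts]     # (3, D) rows that would be blended

# "Learnable" = gradients reach the table; one manual SGD step:
before = latents[hit_verts].copy()
upstream_grad = np.ones_like(pixel_feats)  # pretend dL/d(pixel_feats)
latents[hit_verts] -= 0.1 * upstream_grad
```

Only the rows of vertices that were actually rasterized receive gradient in a given step, which is exactly how `nn.Embedding`-style sparse updates behave in a framework with autograd.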

@Jason-Chi-xx

Wondering how to get such a 3D skeleton pose. I searched for it in AMASS but could not find it.
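A note that may help: AMASS distributes SMPL(-H/-X) *parameters* rather than rendered skeletons, so the 3D joints have to be computed by running those parameters through the body model's joint regressor. A sketch of reading a sequence, assuming the standard AMASS `.npz` keys (`poses`, `betas`, `trans`); the fake file written here just stands in for a real download:

```python
import numpy as np

# Write a stand-in for a real AMASS sequence file (SMPL-H: 52 joints x 3
# axis-angle params = 156 per frame).
np.savez("fake_amass_seq.npz",
         poses=np.zeros((120, 156)),   # per-frame pose parameters
         betas=np.zeros(16),           # body shape coefficients
         trans=np.zeros((120, 3)),     # global root translation
         mocap_framerate=120.0)

data = np.load("fake_amass_seq.npz")
root_orient = data["poses"][:, :3]     # global root rotation (axis-angle)
body_pose = data["poses"][:, 3:66]     # 21 body joints x 3
```

From `body_pose` and `betas`, a body-model library (e.g. the `smplx` package) produces the posed mesh and 3D joint locations; the skeleton is an output of that forward pass, not a field stored in the AMASS files.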
