NaN id loss while training your custom dataset #546

Open
yangkaizhang opened this issue Sep 15, 2023 · 2 comments

Comments

@yangkaizhang

Just check your label files in "./labels_with_ids". Some custom datasets list their annotations in order of frame_id instead of track_id, which the original "gen_labels_[ ].py" scripts in this project do not expect (especially for modified MOT formats), so the id indices are wrong after conversion (usually far too large).
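For anyone hitting the same thing, here is a minimal sketch for spotting such a broken conversion. The "./labels_with_ids" path and the FairMOT-style "class id x_center y_center w h" line layout are assumptions, and the 100000 threshold is only for illustration:

```python
import glob
import os

LABEL_ROOT = "./labels_with_ids"  # adjust to your dataset layout

max_id = -1
suspicious = []
for path in glob.glob(os.path.join(LABEL_ROOT, "**", "*.txt"), recursive=True):
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            track_id = int(float(parts[1]))  # second column is the identity
            max_id = max(max_id, track_id)
            # an identity far larger than the number of objects you actually track
            # usually means the converter iterated over frame_id instead of track_id
            if track_id > 100000:
                suspicious.append(path)
                break

print("largest identity found:", max_id)
print("suspicious files:", suspicious[:10])
```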

@ZWL706

ZWL706 commented Sep 23, 2023

Hello, I'd like to ask whether you have trained on the "ETHZ" dataset. I also ran into id_loss=nan, but the cause does not seem to be the same as yours. It comes from the author's use of the cross-entropy loss self.IDLoss = nn.CrossEntropyLoss(ignore_index=-1); this setting means that every entry whose label is -1 is ignored when the loss is computed.
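With the default reduction="mean", a batch in which every identity label is -1 then divides 0 by 0, which is where the NaN comes from. A minimal sketch reproducing this (the tensor shapes are made up):

```python
import torch
import torch.nn as nn

id_loss_fn = nn.CrossEntropyLoss(ignore_index=-1)

logits = torch.randn(4, 10)                          # 4 detections, 10 identities
all_ignored = torch.full((4,), -1, dtype=torch.long) # e.g. images with no identity labels
print(id_loss_fn(logits, all_ignored))               # tensor(nan): every target ignored, mean is 0/0

some_valid = torch.tensor([-1, 2, -1, 5])
print(id_loss_fn(logits, some_valid))                # finite: only labels 2 and 5 contribute
```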

@yangkaizhang
Author

> Hello, I'd like to ask whether you have trained on the "ETHZ" dataset. I also ran into id_loss=nan, but the cause does not seem to be the same as yours. It comes from the author's use of the cross-entropy loss self.IDLoss = nn.CrossEntropyLoss(ignore_index=-1); this setting means that every entry whose label is -1 is ignored when the loss is computed.

For ReID datasets you can simply leave out the id loss during training; my repo only covers the case where incorrectly generated, mismatched labels make the id loss suffer a gradient explosion in specific epochs.
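If you want to keep the id branch but avoid the NaN when a batch carries no valid identities, one option is a small guard around the loss call. A minimal sketch with illustrative names (safe_id_loss is not the project's actual code):

```python
import torch
import torch.nn as nn

id_loss_fn = nn.CrossEntropyLoss(ignore_index=-1)

def safe_id_loss(id_logits: torch.Tensor, id_targets: torch.Tensor) -> torch.Tensor:
    # compute the cross entropy only when at least one target is a real identity;
    # otherwise return a zero so the total loss stays finite
    if (id_targets != -1).any():
        return id_loss_fn(id_logits, id_targets)
    return id_logits.sum() * 0.0  # zero on the same device/dtype, keeps autograd happy

logits = torch.randn(4, 10)
print(safe_id_loss(logits, torch.full((4,), -1, dtype=torch.long)))  # tensor(0.)
print(safe_id_loss(logits, torch.tensor([-1, 2, -1, 5])))            # finite cross entropy
```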
