About BEVFusion Data #130

mrabiabrn opened this issue Dec 17, 2024 · 5 comments

Thank you for sharing the checkpoint for BEVFusion. I want to reproduce the CAM-ONLY results from Table 6. I generated data using the 272x736 checkpoint, dropping 50% of the boxes both from the scene and from the ground-truth bboxes during generation.

I was wondering how you integrated the new data into BEVFusion. There is a data preparation step described here: https://github.com/open-mmlab/mmdetection3d/blob/1.0/docs/en/datasets/nuscenes_det.md

How did you prepare these files for the generated data, given that we dropped some of the boxes? Can you give details about the data-loading process?

Thank you

flymin (Member) commented Dec 18, 2024

Hi, you can edit the metadata file and append new info entries in which the generated images replace the original ones.
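
Something along these lines should work (a rough sketch only, assuming the mmdetection3d v1.0 nuscenes_infos_train.pkl layout with a top-level 'infos' list and per-camera 'data_path' entries; the generated-image directory is a placeholder, adapt the path mapping to your own setup):

```python
import copy
import pickle
import os.path as osp

# Hedged sketch: layout follows the mmdetection3d v1.0 nuScenes infos files
# ({'infos': [...], 'metadata': {...}}). GEN_ROOT is a hypothetical folder
# holding the generated images, mirrored by camera name and filename.
GEN_ROOT = "data/nuscenes_generated"  # placeholder, not an official path

with open("data/nuscenes/nuscenes_infos_train.pkl", "rb") as f:
    data = pickle.load(f)

new_infos = []
for info in data["infos"]:
    gen_info = copy.deepcopy(info)
    for cam_name, cam in gen_info["cams"].items():
        # Swap the original camera image for the generated one; here we assume
        # the generated image keeps the original filename under GEN_ROOT/<cam_name>/.
        fname = osp.basename(cam["data_path"])
        cam["data_path"] = osp.join(GEN_ROOT, cam_name, fname)
    new_infos.append(gen_info)

# Append the generated-image copies after the original samples and save a new file.
data["infos"] = data["infos"] + new_infos
with open("data/nuscenes/nuscenes_infos_train_aug.pkl", "wb") as f:
    pickle.dump(data, f)
```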

mrabiabrn (Author) commented Dec 19, 2024

We have the infos files (which contain ground-truth boxes, image paths, etc.) and the database pickle files (which contain point clouds). So all I need to do is append my generated set's information to the infos files and leave the database pickle files untouched, is that correct? But since we are removing some of the objects, should we also remove their point-cloud data? I am a bit confused. Thank you for your help.
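
For reference, this is roughly how I am inspecting the two files (a sketch; file names and keys follow mmdetection3d 1.0 conventions and may differ by version):

```python
import pickle

# Hedged sketch: check what each file actually contains before editing anything.
with open("data/nuscenes/nuscenes_infos_train.pkl", "rb") as f:
    infos = pickle.load(f)
print(list(infos.keys()))              # e.g. 'infos', 'metadata'
print(list(infos["infos"][0].keys()))  # per-sample keys: gt_boxes, gt_names, cams, lidar_path, ...

with open("data/nuscenes/nuscenes_dbinfos_train.pkl", "rb") as f:
    dbinfos = pickle.load(f)
print(list(dbinfos.keys())[:5])                    # per-class lists of cropped object entries
print(dbinfos[next(iter(dbinfos))][0].keys())      # entries point to .bin object point clouds
```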

flymin (Member) commented Dec 20, 2024

We did not modify the database pickle files. I think this should be safe, at least for camera-only training. I am not sure about further details for this part.
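
My rough understanding (not verified in detail) is that the database pickles are only read by the GT-sampling augmentation on the point-cloud branch, which a camera-only pipeline does not include, so leaving them unmodified should not matter there. Roughly, in mmdetection3d-style configs:

```python
# Hedged sketch of the relevant config fragments (mmdetection3d-style names);
# only GT-sampling reads the database pickle.
db_sampler = dict(
    data_root="data/nuscenes/",
    info_path="data/nuscenes/nuscenes_dbinfos_train.pkl",  # the database pickle
    rate=1.0,
    # ... class and point filters ...
)

lidar_train_pipeline = [
    dict(type="LoadPointsFromFile", coord_type="LIDAR", load_dim=5, use_dim=5),
    dict(type="ObjectSample", db_sampler=db_sampler),  # <- the only step that touches dbinfos
    # ...
]

camera_only_train_pipeline = [
    dict(type="LoadMultiViewImageFromFiles"),
    dict(type="LoadAnnotations3D", with_bbox_3d=True, with_label_3d=True),
    # no ObjectSample here, so the (unmodified) database pickle is never read
]
```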

mrabiabrn (Author) commented Dec 24, 2024

Thank you, I figured out the data part by appending the new information to the train pickle file.
I have several questions related to other parts:

  • I noticed that for BEVFusion data generation you adjust the padding so that the generated images contain black regions. Should we pass them to BEVFusion directly?
  • Which CFG scale are you using?
  • Are you using the default seed for BEVFusion?
  • In Table 6, when reporting the 0.5x and 1x results, do you report intermediate values from the full training run (2x = 20 epochs), or do you train the model separately for 0.5x (5 epochs) and 1x (10 epochs)? The two differ because of the learning-rate scheduler.

Thank you in advance.


github-actions bot commented Jan 1, 2025

This issue is stale because it has been open for 7 days with no activity. If you do not have any follow-ups, the issue will be closed soon.

github-actions bot added the stale label on Jan 1, 2025