In https://github.com/tsb0601/MMVP/blob/main/LLaVA/finetune.sh#L20, I notice that `--per_device_train_batch_size` is 11. However, in the paper appendix (Table 4, "Hyperparameters for MoF training on LLaVA and LLaVA-1.5"), the LLaVA-1.5 stage-2 training batch size is 128. Have I misunderstood? It seems that `--per_device_train_batch_size` should be set to 8.
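For reference, here is a minimal sketch of the usual effective-batch-size arithmetic behind this reasoning. The GPU count of 16 and the gradient-accumulation value are assumptions for illustration only; neither is stated in finetune.sh or the paper excerpt quoted above:

```python
# Effective (global) batch size in standard multi-GPU training:
# per-device batch size x number of devices x gradient accumulation steps.
per_device_train_batch_size = 8   # value the paper's 128 would imply
num_gpus = 16                     # assumed device count (hypothetical)
gradient_accumulation_steps = 1   # assumed; the script may set this differently

global_batch_size = (per_device_train_batch_size
                     * num_gpus
                     * gradient_accumulation_steps)
print(global_batch_size)  # 128, matching Table 4 in the appendix
```

Under these assumptions, a per-device value of 8 reproduces the paper's 128, whereas 11 would not divide evenly into it.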
Please provide an update on this.