Thanks for your excellent work!

https://github.com/OpenGVLab/LLaMA-Adapter

From the repository above, I understand that llama-adapter-multimodal-v2 is pretrained on Image-Text-V1 and fine-tuned on GPT4LLM, LLaVA, and VQAv2. How can I obtain these datasets?

I would really appreciate any help you can provide.
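For reference, here is a minimal sketch of how the public fine-tuning sets named above are often fetched. The repo IDs, file names, and URLs below are assumptions (they are not taken from the LLaMA-Adapter docs) and may have moved, so please verify them against each dataset's release page.

```python
# Minimal sketch, not the official recipe: fetch the public fine-tuning data
# mentioned above. Repo IDs, file names, and URLs are assumptions and may
# have changed -- check each release page before relying on them.
from huggingface_hub import hf_hub_download  # pip install huggingface_hub
import urllib.request

# LLaVA visual-instruction data (assumed Hugging Face dataset repo / file name).
llava_json = hf_hub_download(
    repo_id="liuhaotian/LLaVA-Instruct-150K",
    filename="llava_instruct_150k.json",
    repo_type="dataset",
)

# GPT4LLM: GPT-4-generated Alpaca-style instructions (assumed raw GitHub path).
urllib.request.urlretrieve(
    "https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM"
    "/raw/main/data/alpaca_gpt4_data.json",
    "alpaca_gpt4_data.json",
)

# VQAv2 questions/annotations and the COCO images are distributed as zip files
# linked from https://visualqa.org/download.html; download and unzip those.
# The Image-Text-V1 pretraining mixture is not covered by this sketch.
```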
yeonju7kim changed the title from "I don't know which data to use and how to reproduce the model llama-adapter-multimodal-v2." to "I don't know which data to use to reproduce the model llama-adapter-multimodal-v2." on Nov 25, 2023.