Hi,
Thanks for your good work!
Can you clarify the learning rate, batch size, and number of epochs used for the baseline LoRA experiments across the different datasets?
Kind regards,
Jason
Thank you for your interest in our work! The LoRA hyper-parameters are listed below. The seed list is {0, 21, 42, 81, 100}, and the batch size is 8.
I hope this helps.
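For anyone reproducing this setup, here is a minimal sketch of what a multi-seed LoRA baseline run could look like with the Hugging Face `peft` and `transformers` libraries. Only the seed list {0, 21, 42, 81, 100} and the batch size of 8 are confirmed in this thread; the base model, LoRA rank, alpha, target modules, learning rate, and epoch count below are illustrative placeholders, not the repo's actual values.

```python
# Sketch of a multi-seed LoRA baseline, assuming Hugging Face peft/transformers.
# Confirmed from this thread: seeds {0, 21, 42, 81, 100}, batch_size = 8.
# Everything else (model, rank, alpha, target modules, LR, epochs) is a placeholder.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, TrainingArguments, set_seed

SEEDS = [0, 21, 42, 81, 100]   # confirmed: one run per seed
BATCH_SIZE = 8                 # confirmed in the maintainer's reply

def build_lora_model(base_model: str = "roberta-base", rank: int = 8):
    """Wrap a base model with LoRA adapters.
    `base_model`, `rank`, alpha, and `target_modules` are assumptions."""
    model = AutoModelForSequenceClassification.from_pretrained(base_model)
    lora_cfg = LoraConfig(
        r=rank,
        lora_alpha=2 * rank,               # common convention; not confirmed here
        target_modules=["query", "value"], # typical choice for RoBERTa attention
        lora_dropout=0.05,
    )
    return get_peft_model(model, lora_cfg)

for seed in SEEDS:
    set_seed(seed)  # fix all RNGs so each run is reproducible
    model = build_lora_model()
    args = TrainingArguments(
        output_dir=f"lora-seed{seed}",
        per_device_train_batch_size=BATCH_SIZE,
        learning_rate=5e-4,   # placeholder: the thread does not state the LR
        num_train_epochs=10,  # placeholder: epochs per dataset not stated
        seed=seed,
    )
    # ... build a Trainer with `args`, the dataset, and `model`, then train.
```

Running one job per seed and averaging the resulting metrics is the usual way such a seed list is used; reporting the mean (and often the standard deviation) over the five runs makes the baseline comparison less sensitive to initialization noise.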
Hi, I was wondering if you use the same learning rate for all the rank settings. Looking forward to your help :)