Following the guide here: https://github.com/intel/ai-reference-models/tree/main/models_v2/pytorch/llama/training/cpu, I ran into several issues:

https://github.com/intel/ai-reference-models/blob/main/models_v2/pytorch/llama/training/cpu/finetune.py#L36
https://github.com/intel/ai-reference-models/blob/main/models_v2/pytorch/llama/training/cpu/finetune.py#L281

Running finetune.py on Xeon fails with a "no attribute 'weight'" error (see intel-extension-for-pytorch#701).
We suspect this is caused by a modeling-file conflict between different transformers versions. The IPEX developers are looking into the issue.
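If a transformers/IPEX version mismatch is the suspect, a first diagnostic step is to compare the installed package versions against the ones pinned by the reference model. A minimal sketch (the package names are the usual PyPI distribution names; nothing here is specific to the fix itself):

```python
# Print installed versions of the packages involved, so they can be
# compared against the versions the reference model was tested with.
import importlib.metadata as md

for pkg in ("transformers", "intel-extension-for-pytorch", "torch"):
    try:
        print(f"{pkg}=={md.version(pkg)}")
    except md.PackageNotFoundError:
        # Package is absent from the environment entirely.
        print(f"{pkg}: not installed")
```

Running this in the failing environment and attaching the output to the issue makes version conflicts easy to spot.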
Thanks for the fix; it works now, except for a warning that can be resolved by following huggingface/transformers#29278.