ValueError: Can't find 'adapter_config.json' at './results1/checkpoints/0000500' #178
Comments
I think this error is due to an incorrect model path './results1/checkpoints/0000500'. Please ensure you have entered the correct path. |
The path is correct, but I think there is no adapter_config.json in the checkpoint directory. How can I solve this? Thanks!
|
@hebz11 , could you show what files are in that directory (./results1/checkpoints/0000500)? |
Hi Shitao Xiao, the directory has 5 files, but adapter_config.json is not one of them.
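(For reference, a quick way to see what is actually in that checkpoint directory and whether it looks like a LoRA checkpoint; the path below is the one from this issue, and the file names in the comments are what peft normally writes when saving an adapter.)

import os

ckpt_dir = "./results1/checkpoints/0000500"  # path from this issue
files = sorted(os.listdir(ckpt_dir))
print(files)

# A LoRA checkpoint saved with peft normally contains adapter_config.json
# plus adapter_model.safetensors (or adapter_model.bin); a full-model
# checkpoint contains the full weights instead.
print("looks like a LoRA checkpoint:", "adapter_config.json" in files)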
|
What are these five files? |
@hebz11 , I think you are fine-tuning the entire model rather than doing LoRA fine-tuning. You should load the checkpoint as described here: https://github.com/VectorSpaceLab/OmniGen/blob/main/docs/fine-tuning.md#inference |
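(In other words, merge_lora is only for LoRA checkpoints, which contain adapter_config.json. For a fully fine-tuned model the linked fine-tuning.md#inference doc is authoritative; the sketch below only illustrates the difference, and assumes OmniGenPipeline.from_pretrained also accepts a local checkpoint directory.)

from OmniGen import OmniGenPipeline

# LoRA fine-tuning: the checkpoint directory contains adapter_config.json,
# so load the base model and merge the adapter into it:
#   pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")
#   pipe.merge_lora("./results/toy_finetune_lora/checkpoints/0000200")

# Full fine-tuning: there is no adapter to merge; load the saved checkpoint
# directory itself (sketch; follow fine-tuning.md#inference for the exact call):
pipe = OmniGenPipeline.from_pretrained("./results1/checkpoints/0000500")

# A typical text-to-image call (arguments as in the project README):
images = pipe(
    prompt="a photo of a cat",  # placeholder prompt
    height=1024,
    width=1024,
    guidance_scale=2.5,
    seed=0,
)
images[0].save("output.png")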
Dear Author, I fine-tuned with your train.py. When training was done, I created a new .py script (u.py) to test the model, but it fails with the following error:
root@autodl-container-b3ec4da47b-21563288:~/autodl-tmp/OmniGen# python3 u.py
Model not found, downloading...
Downloaded model to /root/.cache/huggingface/hub/models--Shitao--OmniGen-v1/snapshots/58e249c7c7634423c0ba41c34a774af79aa87889
Loading safetensors
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.12/site-packages/peft/config.py", line 197, in _get_peft_type
config_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
validate_repo_id(arg_value)
File "/root/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './results1/checkpoints/0000500'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/autodl-tmp/OmniGen/u.py", line 4, in
pipe.merge_lora("./results1/checkpoints/0000500") # e.g., ./results/toy_finetune_lora/checkpoints/0000200
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/autodl-tmp/OmniGen/OmniGen/pipeline.py", line 98, in merge_lora
model = PeftModel.from_pretrained(self.model, lora_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 325, in from_pretrained
PeftConfig._get_peft_type(
File "/root/miniconda3/lib/python3.12/site-packages/peft/config.py", line 203, in _get_peft_type
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{model_id}'")
ValueError: Can't find 'adapter_config.json' at './results1/checkpoints/0000500'
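(From the log and traceback above, u.py is presumably something like the following; the from_pretrained download and the merge_lora call on line 4 come straight from the output, the rest is guessed and is never reached.)

from OmniGen import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")  # triggers the "Model not found, downloading..." log
pipe.merge_lora("./results1/checkpoints/0000500")  # e.g., ./results/toy_finetune_lora/checkpoints/0000200  <- fails here

# ...generation code would follow, but is never reached

merge_lora calls PeftModel.from_pretrained, which requires adapter_config.json in the checkpoint directory, so it cannot work on a full-model checkpoint.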