
ValueError: Can't find 'adapter_config.json' at './results1/checkpoints/0000500' #178

Open
hebz11 opened this issue Jan 6, 2025 · 7 comments

hebz11 commented Jan 6, 2025

Dear author, I fine-tuned the model with your train.py. When training finished, I created a new .py script to test the model, but I get the following error:

root@autodl-container-b3ec4da47b-21563288:~/autodl-tmp/OmniGen# python3 u.py
Model not found, downloading...
Downloaded model to /root/.cache/huggingface/hub/models--Shitao--OmniGen-v1/snapshots/58e249c7c7634423c0ba41c34a774af79aa87889
Loading safetensors
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.12/site-packages/peft/config.py", line 197, in _get_peft_type
config_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
validate_repo_id(arg_value)
File "/root/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './results1/checkpoints/0000500'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/autodl-tmp/OmniGen/u.py", line 4, in <module>
pipe.merge_lora("./results1/checkpoints/0000500") # e.g., ./results/toy_finetune_lora/checkpoints/0000200
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/autodl-tmp/OmniGen/OmniGen/pipeline.py", line 98, in merge_lora
model = PeftModel.from_pretrained(self.model, lora_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 325, in from_pretrained
PeftConfig._get_peft_type(
File "/root/miniconda3/lib/python3.12/site-packages/peft/config.py", line 203, in _get_peft_type
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{model_id}'")
ValueError: Can't find 'adapter_config.json' at './results1/checkpoints/0000500'

staoxiao (Contributor) commented Jan 6, 2025

I think this error is due to an incorrect model path './results1/checkpoints/0000500'. Please ensure you have entered the correct path.
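As a quick sanity check, you can verify the directory exists and contains the adapter_config.json that PEFT looks for before calling merge_lora. (This is a sketch, not part of OmniGen; `is_lora_checkpoint` is a hypothetical helper name.)

```python
# Hypothetical helper (not part of OmniGen): check that a directory looks
# like a PEFT LoRA checkpoint before passing it to pipe.merge_lora(), which
# calls PeftModel.from_pretrained() and requires adapter_config.json.
import os

def is_lora_checkpoint(path: str) -> bool:
    """True if `path` is a directory containing PEFT's adapter_config.json."""
    return os.path.isdir(path) and os.path.isfile(
        os.path.join(path, "adapter_config.json")
    )
```

If this returns False for './results1/checkpoints/0000500', either the path is wrong or the checkpoint was not saved as a LoRA adapter.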

hebz11 (Author) commented Jan 6, 2025 via email

staoxiao (Contributor) commented Jan 7, 2025

@hebz11, could you show what files are in that directory (./results1/checkpoints/0000500)?

hebz11 (Author) commented Jan 7, 2025 via email

staoxiao (Contributor) commented Jan 7, 2025

What are these five files?

hebz11 (Author) commented Jan 7, 2025

Sorry, I replied by email, so the image couldn't be loaded on GitHub. Here it is again:
[screenshot: contents of ./results1/checkpoints/0000500]

staoxiao (Contributor) commented Jan 8, 2025

@hebz11, I think you are fine-tuning the entire model rather than doing LoRA fine-tuning, so the checkpoint contains full model weights and no adapter_config.json. You should load the checkpoint as described here: https://github.com/VectorSpaceLab/OmniGen/blob/main/docs/fine-tuning.md#inference
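To make the distinction concrete, here is a sketch that classifies a checkpoint directory so you know which loading path to take. The file names checked here are assumptions based on typical PEFT and safetensors output, not OmniGen's documented layout, and `checkpoint_kind` is a hypothetical helper.

```python
# Sketch: classify a checkpoint directory produced by train.py.
# - A LoRA checkpoint is marked by adapter_config.json
#   (load it with pipe.merge_lora(path)).
# - A full fine-tune stores the model weights directly
#   (load it following docs/fine-tuning.md#inference instead).
# File names here are assumptions, not OmniGen's API.
import os

def checkpoint_kind(path: str) -> str:
    files = set(os.listdir(path))
    if "adapter_config.json" in files:
        return "lora"   # safe to call pipe.merge_lora(path)
    if any(f.endswith((".safetensors", ".bin")) for f in files):
        return "full"   # load the whole model per the fine-tuning doc
    return "unknown"
```

Calling merge_lora on a "full" checkpoint reproduces exactly the ValueError in this issue, because PeftModel.from_pretrained cannot find adapter_config.json.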
