
Failed to find few-shot to image dataset #176

Open
lucky-liuzhihong opened this issue Jan 3, 2025 · 4 comments


@lucky-liuzhihong

Thank you very much for open-sourcing the X2I dataset! I have been going through your paper and noticed that in Section 3.3 you mention the creation of a "Few-shot to Image" dataset. However, when I check the X2I in-context learning dataset on Hugging Face, I can only find datasets such as ADE, GoPro, and Derain, and I have failed to locate the "Few-shot to Image" dataset mentioned in the paper.
Could you kindly clarify where I can access the "Few-shot to Image" dataset? Or perhaps I have misunderstood something.

@yuezewang
Collaborator

yuezewang commented Jan 3, 2025


Hi, thank you for your attention. X2I-in-context-learning is exactly what you are looking for, and I will mark "Few-shot to Image" on this repo.
In fact, few-shot learning is one form of in-context learning, similar to one-shot learning, so I used the more general naming convention.

@lucky-liuzhihong
Author

Thank you for your response! However, I am still a bit confused by a statement in Section 3.3 of the paper: "Due to limitations in training, we opted to use only one example to improve training efficiency."

My understanding of in-context learning (or few-shot learning) is that it doesn't require additional training, but instead relies on providing an example within the input to guide the model's output. Could you please clarify why the term "training" is used here in relation to in-context learning?

Additionally, I noticed that in the in-context learning dataset, such as the Enhance dataset, each data point seems to consist of a source_image + edited_image. Does this imply that we need to manually construct the in-context learning input as described in Section 3.3?

I may be misunderstanding something, and I would appreciate any clarification. Thank you in advance!

@staoxiao
Contributor

staoxiao commented Jan 6, 2025

Hi, @lucky-liuzhihong, the in-context learning dataset is used to teach the model this input format: predicting outputs based on input examples. When encountering new data, you can simply provide the example pairs in the input, without needing to retrain.
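To make the distinction concrete, here is a minimal sketch of how one might assemble a one-shot in-context input from a (source_image, edited_image) pair in the dataset plus a new query image. This is purely illustrative and is not the repo's actual API: the function name, dict fields, instruction text, and `<img_N>` placeholder tokens are all assumptions.

```python
# Illustrative only: building a one-shot in-context input by pairing one
# demonstration (source -> edited) with a new query image. Field names and
# image-placeholder tokens are hypothetical, not the official X2I format.

def build_one_shot_input(example_source, example_edited, query_source,
                         instruction="Apply the same enhancement"):
    """Pair one demonstration with a query image (all args are file paths)."""
    prompt = (
        f"{instruction}. Example input: <img_0>, example output: <img_1>. "
        f"Now process: <img_2>"
    )
    # The model sees the demonstration inside the input itself, so no
    # retraining is needed for new data at inference time.
    return {"prompt": prompt,
            "images": [example_source, example_edited, query_source]}

sample = build_one_shot_input("rainy_001.png", "clean_001.png", "rainy_new.png")
```

The key point is that the demonstration pair travels inside the input, which is the format the in-context learning dataset teaches during training; at inference time only this input changes.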

@lucky-liuzhihong
Author

Thank you for your reply! I understand now.
