Failed to find few-shot to image dataset #176
Thank you very much for open-sourcing the X2I dataset! I have been going through your paper and noticed that Section 3.3 mentions the creation of a "Few-shot to Image" dataset. However, when I checked the X2I in-context learning dataset on Hugging Face, I could only find datasets such as ADE, GoPro, and Derain, among others, and failed to locate the "Few-shot to Image" dataset mentioned in the paper.
Could you kindly clarify where I can access the "Few-shot to Image" dataset? Or perhaps I have misunderstood something.
Hi, thank you for your attention. X2I-in-context-learning is exactly what you are looking for, and I will mark 'Few-shot to Image' on this repo.
Thank you for your response! However, I am still a bit confused by a statement in Section 3.3 of the paper: "Due to limitations in training, we opted to use only one example to improve training efficiency." My understanding of in-context learning (or few-shot learning) is that it doesn't require additional training, but instead relies on providing an example within the input to guide the model's output. Could you please clarify why the term "training" is used here in relation to in-context learning? Additionally, I noticed that in the in-context learning datasets, such as Enhance, each data point seems to consist of a source_image + edited_image pair. Does this imply that we need to manually construct the in-context learning input as described in Section 3.3? I may be misunderstanding something, and I would appreciate any clarification. Thank you in advance!
Hi, @lucky-liuzhihong, the in-context learning dataset is used to teach the model this input format: predicting an output from an example pair given in the input. Once the model has learned the format, you can supply example pairs for new data directly in the input, without any retraining.
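For concreteness, here is a minimal sketch of what such a one-shot input could look like at inference time. It assumes the OmniGen pipeline interface (X2I is the OmniGen training corpus); the prompt wording, dataset file names, and generation parameters below are illustrative assumptions, not exact paths or settings from the Hugging Face release.

```python
# Minimal one-shot ("few-shot to image") inference sketch.
# Assumptions: the OmniGen pipeline interface and its <img><|image_N|></img>
# placeholders; the file paths below are hypothetical stand-ins, not actual
# paths from the X2I release on Hugging Face.
from OmniGen import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1")

# One demonstration pair (source -> edited) drawn from an in-context dataset
# such as Enhance, followed by the new query image. Section 3.3 trains with a
# single example per input, so inference mirrors that format: one example
# pair, then the query, with no retraining needed for new data.
prompt = (
    "Following the example, enhance the image. "
    "Example input: <img><|image_1|></img> "
    "Example output: <img><|image_2|></img> "
    "New input: <img><|image_3|></img>"
)
images = pipe(
    prompt=prompt,
    input_images=[
        "enhance/0001_source.png",  # hypothetical example source image
        "enhance/0001_edited.png",  # hypothetical example edited image
        "my_input.png",             # the new image to be enhanced
    ],
    height=1024,
    width=1024,
    guidance_scale=2.5,        # values in line with the repo's examples
    img_guidance_scale=1.6,
)
images[0].save("output.png")   # predicted edited version of my_input.png
```

The i-th entry of `input_images` is bound to the `<|image_i|>` placeholder in the prompt, so the demonstration pair tells the model which transformation to apply to the query image.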
Thank you for your reply! I understand now.