Update tasks and 🛳️ mask-generation and zero-shot-object-detection #462
Conversation
Looking nice!
- [Training CodeParrot 🦜 from Scratch](https://huggingface.co/blog/codeparrot)
- [How to generate text: using different decoding methods for language generation with Transformers](https://huggingface.co/blog/how-to-generate)
### Model Inference & Deployment
- [Optimizing your LLM in production](https://huggingface.co/blog/optimize-llm)
Let's leave a blank line between headings and lists, as some editors won't render them well otherwise.
- [Preference Tuning LLMs with Direct Preference Optimization Methods](https://huggingface.co/blog/pref-tuning)
- [Fine-tune Llama 2 with DPO](https://huggingface.co/blog/dpo-trl)

### Hugging Face Model Releases
I would not have a section about HF-specific models (also IDEFICS is multimodal)
Co-authored-by: Omar Sanseviero <[email protected]>
I will lint after the last review of the content.
I focused on details mostly, but it looks good!
image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
outputs = generator(image_url, points_per_batch = 256)
outputs = generator(image_url, points_per_batch = 64)
Do we need to repeat `points_per_batch` here?
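For context, a single call should be enough: `points_per_batch` only controls how many prompt points are batched through the model per forward pass (a speed/memory trade-off), so repeating the line just redoes the same work. A minimal sketch, assuming `facebook/sam-vit-base` as an illustrative checkpoint (the task page may pin a different one):

```python
from transformers import pipeline

# Illustrative checkpoint; the task page may use a different SAM variant.
generator = pipeline("mask-generation", model="facebook/sam-vit-base")

image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
# points_per_batch sets how many prompt points run through the model at once;
# one call with the chosen value is sufficient.
outputs = generator(image_url, points_per_batch=64)
```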
outputs = generator(image_url, points_per_batch = 256)
outputs = generator(image_url, points_per_batch = 64)
outputs["masks"]
# array of multiple binary masks returned for each generated mask
Would it be possible to show them (or a few)?
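As one possible way to show a few of them, here is a hedged sketch that overlays the first masks on the input image; matplotlib, Pillow, and the `facebook/sam-vit-base` checkpoint are assumptions here, not necessarily what the final page uses:

```python
import matplotlib.pyplot as plt
import numpy as np
import requests
from PIL import Image
from transformers import pipeline

image_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

# Illustrative checkpoint for the mask-generation pipeline.
generator = pipeline("mask-generation", model="facebook/sam-vit-base")
outputs = generator(image_url, points_per_batch=64)

plt.imshow(image)
# Overlay the first few binary masks with random semi-transparent colors;
# pixels outside each mask stay fully transparent.
for mask in outputs["masks"][:3]:
    color = np.concatenate([np.random.random(3), [0.5]])  # RGBA
    plt.imshow(mask[..., None] * color[None, None, :])
plt.axis("off")
plt.show()
```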
- [Finetune Stable Diffusion Models with DDPO via TRL](https://huggingface.co/blog/pref-tuning)
- [LoRA training scripts of the world, unite!](https://huggingface.co/blog/sdxl_lora_advanced_script)
- [Using LoRA for Efficient Stable Diffusion Fine-Tuning](https://huggingface.co/blog/lora)
I'd put them exactly in reverse order lol. Also https://huggingface.co/blog/annotated-diffusion is a great, great resource (and not only for fine-tuning).
They weren't in any particular order, honestly.
@@ -61,3 +61,5 @@ await inference.textToSpeech({
- [An introduction to SpeechT5, a multi-purpose speech recognition and synthesis model](https://huggingface.co/blog/speecht5).
- [A guide on Fine-tuning Whisper For Multilingual ASR with 🤗Transformers](https://huggingface.co/blog/fine-tune-whisper)
This is not TTS, is it?
@@ -61,3 +61,5 @@ await inference.textToSpeech({
- [An introduction to SpeechT5, a multi-purpose speech recognition and synthesis model](https://huggingface.co/blog/speecht5).
- [A guide on Fine-tuning Whisper For Multilingual ASR with 🤗Transformers](https://huggingface.co/blog/fine-tune-whisper)
- [Speech Synthesis, Recognition, and More With SpeechT5](https://huggingface.co/blog/speecht5)
- [Optimizing a Text-To-Speech model using 🤗 Transformers](https://huggingface.co/blog/optimizing-bark)
cc @Vaibhavs10 in case there are some obvious tts resources we can use here :)
Co-authored-by: Pedro Cuenca <[email protected]>
This is a follow-up to my previous PR: it adds resources to various tasks and ships the mask-generation and zero-shot-object-detection pages.
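For reference, a rough sketch of the zero-shot-object-detection task this PR ships, via the transformers pipeline; the OWL-ViT checkpoint, candidate labels, and image URL below are illustrative assumptions, not necessarily what the task page shows:

```python
import requests
from PIL import Image
from transformers import pipeline

# Illustrative zero-shot detector checkpoint.
detector = pipeline("zero-shot-object-detection", model="google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Candidate labels are free-form text; the model needs no training on these classes.
predictions = detector(image, candidate_labels=["cat", "remote control", "couch"])
for pred in predictions:
    print(pred["label"], round(pred["score"], 3), pred["box"])
```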