[Feature] qwen2 vl support the turbomind engine #2774
Comments
#2720 is working on it.
Hi @lvhan028, do you have an estimate of when that PR will be merged? Thank you in advance!
Not yet. We are trying to refactor the VLM inference in PR #2810.
Is there any plan to merge this feature?
#2720 is still open.
Sorry about that. We are occupied by other top-priority features.
Motivation
1. Qwen2-VL achieves state-of-the-art results among open-source vision-language models.
2. LMDeploy is an excellent inference framework.
3. Therefore, it is important to support Qwen2-VL in the TurboMind engine.
Related resources
No response
Additional context
No response