🚀 Feature
Expansion of the OpenAI Spec to include text-to-image and other multi-modal models.
Motivation
As a developer, I'd like to host several text-to-image (and other multi-modal) models on my own hardware and access them through an API that conforms to the OpenAI API standard. In other words, I'd like the simple deployment experience LitServe already offers, extended to text-to-image and other multi-modal models.
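For concreteness, this is the request/response shape a conforming server would need to speak. Field names follow OpenAI's public Images API (`POST /v1/images/generations`); the model name is a placeholder, and the response below is a dummy illustrating the envelope, not real output:

```python
import base64
import json

# Request body per OpenAI's image-generation endpoint
# (POST /v1/images/generations). "my-local-model" is a placeholder
# for whatever model the self-hosted server exposes.
request_body = {
    "model": "my-local-model",
    "prompt": "a watercolor fox in the snow",
    "n": 1,
    "size": "1024x1024",
    "response_format": "b64_json",  # or "url"
}

# A conforming response carries one entry per generated image in "data",
# each holding either a "b64_json" payload or a "url".
dummy_png = base64.b64encode(b"<png bytes>").decode()
response_body = {
    "created": 1700000000,
    "data": [{"b64_json": dummy_png}],
}

print(json.dumps(request_body, indent=2))
print(len(response_body["data"]), "image(s) returned")
```

Any server that accepts the first payload and returns the second is interchangeable with OpenAI's endpoint from the client's point of view, which is the whole appeal of conforming to the spec.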
Pitch
See Feature/Motivation.
Alternatives
A few libraries (LocalAI, for example) offer roughly this functionality, but I'd prefer to stick with the simplicity of LitServe.
Additional context
Is this feature planned in the short term, or not at all?
Or should I be writing a custom "spec" myself? I can't find much documentation on that; any insight would be appreciated!
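For reference, the hooks a custom spec would route into are the same decode/predict/encode lifecycle LitServe uses for its existing APIs. Below is a stdlib-only sketch of that flow with a stub standing in for a real diffusion pipeline; in an actual server this class would subclass `litserve.LitAPI`, and the names here are illustrative assumptions, not LitServe's confirmed multi-modal interface:

```python
import base64

class TextToImageAPI:
    """Sketch of the decode/predict/encode hook lifecycle; a real
    implementation would subclass litserve.LitAPI instead."""

    def setup(self, device: str) -> None:
        # A real server would load a diffusion pipeline onto `device`
        # here; a stub that fakes PNG bytes stands in.
        self.model = lambda prompt: b"<png bytes for: %s>" % prompt.encode()

    def decode_request(self, request: dict) -> str:
        # Pull the prompt out of an OpenAI-Images-style request body.
        return request["prompt"]

    def predict(self, prompt: str) -> bytes:
        return self.model(prompt)

    def encode_response(self, image_bytes: bytes) -> dict:
        # Mirror OpenAI's response envelope: base64 image under "data".
        return {"data": [{"b64_json": base64.b64encode(image_bytes).decode()}]}

# Driving the hooks by hand, the way a server loop would:
api = TextToImageAPI()
api.setup(device="cpu")
payload = {"model": "my-local-model", "prompt": "a watercolor fox"}
response = api.encode_response(api.predict(api.decode_request(payload)))
print(sorted(response["data"][0]))
```

If a spec is just a fixed mapping from the OpenAI Images request/response shapes onto these four hooks, writing one for text-to-image seems tractable, which is partly why I'm asking.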