
OpenAI Spec Expansion For Multi-Modal Models? #404

Open
norman-kong opened this issue Jan 8, 2025 · 0 comments
Labels
enhancement New feature or request

Comments

@norman-kong

🚀 Feature

Expansion of the OpenAI Spec to include text-to-image and other multi-modal models.

Motivation

As a developer, I'd like to host several text-to-image (and other multi-modal) models on my own hardware and access them via an API that conforms to the OpenAI API standard.

That is, I'd like the same easy deployment and OpenAI-compatible API that LitServe already offers, but for text-to-image and other multi-modal models.
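For concreteness, OpenAI's image generation endpoint (`POST /v1/images/generations`) accepts a JSON body with fields like `prompt`, `n`, and `size`, and returns a `created` timestamp plus a `data` list of base64-encoded images. A server implementing this spec would need to emit responses shaped roughly like the following (a stdlib-only sketch; the field names follow OpenAI's published schema, but `make_image_generation_response` is a hypothetical helper, not part of any library):

```python
import base64
import time

def make_image_generation_response(images: list[bytes]) -> dict:
    """Build a response shaped like OpenAI's POST /v1/images/generations,
    with each image returned as base64 in the `b64_json` field."""
    return {
        "created": int(time.time()),
        "data": [
            {"b64_json": base64.b64encode(img).decode("ascii")}
            for img in images
        ],
    }

# Example: wrap two placeholder image payloads in the OpenAI response shape.
resp = make_image_generation_response([b"fake-png-1", b"fake-png-2"])
```

A LitServe `LitAPI` for a diffusion model would then only need its `encode_response` step to produce this shape for existing OpenAI image clients to work against it.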

Pitch

See Feature/Motivation.

Alternatives

I've seen a couple of libraries that offer something like this (e.g. LocalAI), but I'd prefer to stick with the simplicity of LitServe.

Additional context

Is this feature planned in the short term? Or not at all?

Or should I be writing a "spec"? I can't seem to find much documentation on that. Any insight would be appreciated!

@norman-kong norman-kong added the enhancement New feature or request label Jan 8, 2025