Integrating AnythingLLM OpenAI API fails #25

Open
nesretep-anp1 opened this issue Jan 9, 2025 · 2 comments

Comments

@nesretep-anp1

Integrating AnythingLLM OpenAI API chat completion results in having the evaluation of the response fail.

The responses do not contain the following fields:

.choices[i].index
.usage

Both seem to be part of the evaluation.

The integration of AnythingLLM was made using the "openai-compatible" provider.
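A strict OpenAI-compatible consumer's validation of these fields can be sketched as follows. This is only an illustration of the failure mode described above, not baibot's actual code; `missing_fields` and `REQUIRED_CHOICE_FIELDS` are hypothetical names:

```python
# Hypothetical check: which fields required by a strict OpenAI-style
# consumer are absent from a chat completion response?
REQUIRED_CHOICE_FIELDS = {"index", "message", "finish_reason"}

def missing_fields(resp: dict) -> list[str]:
    missing = []
    if "usage" not in resp:
        missing.append(".usage")
    for i, choice in enumerate(resp.get("choices", [])):
        for field in sorted(REQUIRED_CHOICE_FIELDS - choice.keys()):
            missing.append(f".choices[{i}].{field}")
    return missing
```

Run against an AnythingLLM-style response, this would report `.usage` and `.choices[0].index` as missing, matching the two fields listed above.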

@spantaleev
Contributor

There isn't much information online about the AnythingLLM API and its compatibility with OpenAI's API.

Perhaps you should open an issue with them so they can improve their OpenAI API compatibility?

@nesretep-anp1
Author

You're right; the Swagger files only live on the on-prem servers after installation, so there is no public access to them. Weird.

But here is some short info:

The endpoint is /api/v1/openai/chat/completion, whose input is fully compatible with OpenAI's API.

But, as said above, the response is missing:

.choices[i].index
.usage

Example:

{
  "id": "db9f23b8-70da-41b0-a291-1a8df2769535",
  "object": "chat.completion",
  "created": 1736423701225,
  "model": "sandbox",
  "choices": [
    {
      "message": {"role": "assistant", "content": "Hello!"},
      "logprobs": null,
      "finish_reason": "stop"
    }
  ]
}

Theoretically this response would be usable for baibot, but the corresponding calls fail because the index field and the usage field are missing.

You're absolutely right that this should be fixed by AnythingLLM, and I have, of course, already submitted an appropriate report to them too.

Additionally, I created a workaround for us: a webhook on our n8n instance which, after getting the response from AnythingLLM, adds these fields with 0 and {}.

But on the one hand, that is not a really nice solution. On the other hand, I think more users will, like me, assume that the "openai-compatible" provider is less strict than it actually is, and will then be unable to integrate additional models into their bot.
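The n8n webhook workaround described above could be sketched in Python like this. `patch_anythingllm_response` is a hypothetical helper name; the defaults (positional `index`, empty `usage` object) follow what the workaround injects:

```python
import json

def patch_anythingllm_response(raw: str) -> dict:
    """Add the fields AnythingLLM omits before handing the response
    to a strict OpenAI-compatible consumer (hypothetical helper)."""
    resp = json.loads(raw)
    # Fill in .choices[i].index with each choice's position.
    for i, choice in enumerate(resp.get("choices", [])):
        choice.setdefault("index", i)
    # Fill in .usage with an empty object, as the webhook does.
    resp.setdefault("usage", {})
    return resp
```

Applied to the example response above, this yields `"index": 0` on the single choice and `"usage": {}` at the top level, which should be enough to satisfy the failing field checks.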
