[Feature Request]: Add support for returning prompt and completion token count #2243
Comments
Hi, you can count the tokens yourself in Python by importing a tokenizer and using it to count the tokens. However, this isn't the most reliable solution.
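As a point of reference, here is a minimal sketch of client-side token counting, assuming the `tiktoken` package is available (with a rough character-based fallback when it is not):

```python
def count_tokens(text: str, model: str = "gpt-3.5-turbo") -> int:
    """Count tokens for `text` as the given model would tokenize it."""
    try:
        import tiktoken  # OpenAI's tokenizer library

        enc = tiktoken.encoding_for_model(model)
        return len(enc.encode(text))
    except ImportError:
        # Rough heuristic when tiktoken is unavailable: ~4 characters per token.
        return max(1, len(text) // 4)

print(count_tokens("How many tokens is this prompt?"))
```

Note that this is exactly why client-side counting is not fully reliable: the server also adds chat-formatting tokens around each message, so a local count of the raw text will usually undercount slightly.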
A better way is to use the usage data included in the REST API responses from Azure OpenAI/OpenAI. The usage information is already part of the chat completion response.
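For illustration, a sketch of reading the usage block out of a chat completion response payload; the field names match the documented response shape, while the sample values here are made up:

```python
import json

# Abbreviated sample of a chat completion response. The `usage` field
# is part of every non-streaming completion response.
sample_response = json.loads("""
{
  "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
  "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21}
}
""")

usage = sample_response["usage"]
print(usage["prompt_tokens"])      # tokens consumed by the prompt -> 9
print(usage["completion_tokens"])  # tokens generated in the reply -> 12
print(usage["total_tokens"])       # sum of the two -> 21
```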
If the teams-ai library exposed this data for you to access, similar to how you access messages, it would be more convenient. What language are you planning to use?
In the OpenAIModel.js file.
In my case, I was looking to use this with Python. I'm trying to calculate the number of tokens used myself, but there's no single point where we can get all messages, plus the prompt, plus all sources used. The best way would indeed be having the full response of the OpenAI API bubbled up.
Scenario
Hi, we want to track the tokens used for the prompt and the completion in an app developed with Teams AI.
Solution
Prompt tokens:
The code below can help us get the prompt text.
Similarly, the context should return the tokens used by the input prompt.
Completion tokens:
We're not sure if there is any option or method available to get the completion text via code.
Similarly, there should be an option to get the completion token count.
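One way the request could be satisfied is sketched below: a small accumulator that the library (or the app, if the raw response were exposed) could feed with each response's usage block. `TokenUsageTracker` and its methods are hypothetical names for illustration, not part of the Teams AI API:

```python
from dataclasses import dataclass

@dataclass
class TokenUsageTracker:
    """Accumulates prompt/completion token counts across requests.

    Hypothetical helper for illustration -- not part of the Teams AI library.
    """
    prompt_tokens: int = 0
    completion_tokens: int = 0

    def record(self, usage: dict) -> None:
        # `usage` is the usage block of a chat completion response.
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

tracker = TokenUsageTracker()
tracker.record({"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21})
tracker.record({"prompt_tokens": 15, "completion_tokens": 30, "total_tokens": 45})
print(tracker.prompt_tokens, tracker.completion_tokens, tracker.total_tokens)
# → 24 42 66
```

Because the usage numbers come straight from the API response, this avoids the reliability problems of counting tokens locally.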
Thank you
Additional Context
No response