Support for o1-like reasoning models (LRMs) #8760
@LastRemote have you used o1 via API? My hunch is to align with their abstractions, as others are likely to follow.
Unfortunately no, I do not have access. But according to the documentation, o1 does not support streaming, and the reasoning tokens are not visible in its response at the moment. DeepSeek-R1 is probably the groundbreaker here. The deepseek-r1 model uses special tokens around the CoT content in its raw response. Their API, however, handles it properly in the new `reasoning_content` field.

Right now, fireworks puts everything into `content`.
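For reference, this is roughly what the deepseek-reasoner API looks like today, per their reasoning-model guide (sketch only; the API key placeholder and prompt are mine):

```python
from openai import OpenAI

# deepseek-reasoner is served over an OpenAI-compatible API; the CoT comes
# back in a separate `reasoning_content` field rather than inside `content`.
client = OpenAI(api_key="<DEEPSEEK_API_KEY>", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "9.11 and 9.8, which is greater?"}],
)

reasoning = response.choices[0].message.reasoning_content  # the chain of thought
answer = response.choices[0].message.content               # the final completion

# Note: per the DeepSeek docs, `reasoning_content` must NOT be fed back into
# the next round of conversation, which is what makes the separation matter.
```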
Chiming in here as there is no native rendering for reasoning tokens yet in langfuse (follow-up to this comment by @LastRemote). I have not yet seen a stable schema for how reasoning tokens are included in the API response, as OpenAI does not return them. Would love to learn from this thread and add better support for it in langfuse as well.
It seems like we will need separation of reasoning content and the actual text completions to better manage multi-round conversations with reasoning (for example: https://api-docs.deepseek.com/guides/reasoning_model). This may have an impact on the current structure and functionality of `ChatMessage`, `StreamingChunk`, and generators.

My current proposal is to add a new boolean flag or type in both `TextContent` and `StreamingChunk` to indicate whether this is part of the reasoning steps. `ChatMessage.text` should point to the first non-reasoning text content, and we will need to add a new property for `ChatMessage.reasoning`.

For example, this is what the streaming chunks from a reasoning model might look like:
And the user can access the reasoning and completion parts using `chat_message.reasoning`(s) and `chat_message.text`(s) respectively from the generator output.

The other option is to have a separate `reasoning_content` field in `StreamingChunk` and a `ReasoningContent` class in `ChatMessage._contents`. This is more aligned with the current deepseek-reasoner API, but I feel like it is slightly overcomplicated, and I am not exactly sure whether both `reasoning_content` and `content` can appear in one SSE chunk.

I did some research today, but there are too few reasoning models/APIs available to reach a consensus on what reasoning should look like. I feel like it is probably better to start a discussion thread somewhere and explore the options.