
Support for o1-like reasoning models (LRMs) #8760

Open
LastRemote opened this issue Jan 22, 2025 · 4 comments
Comments

@LastRemote
Contributor

It seems like we will need a separation of reasoning content from the actual text completions to better manage multi-round conversations with reasoning (for example: https://api-docs.deepseek.com/guides/reasoning_model). This may have an impact on the current structure and functionality of ChatMessage, StreamingChunk and the generators.

My current proposal is to add a new boolean flag or type in both TextContent and StreamingChunk to indicate whether the content is part of the reasoning steps. ChatMessage.text should point to the first non-reasoning text content, and we will need to add a new ChatMessage.reasoning property.

For example, this is what the streaming chunks from a reasoning model might look like:

StreamingChunk(content=<reasoning-delta1>, is_reasoning=True)
StreamingChunk(content=<reasoning-delta2>, is_reasoning=True)
StreamingChunk(content=<completion-delta1>, is_reasoning=False)
StreamingChunk(content=<completion-delta2>, is_reasoning=False)

Users can then access the reasoning and completion parts of the generator output via chat_message.reasoning[s] and chat_message.text[s] respectively.
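To make the first option concrete, here is a minimal sketch of what the flag-based design could look like. The class names mirror the Haystack types discussed above, but the exact fields and property names (is_reasoning, reasoning) are assumptions for illustration, not the actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TextContent:
    text: str
    is_reasoning: bool = False  # hypothetical flag marking reasoning parts

@dataclass
class ChatMessage:
    _contents: List[TextContent] = field(default_factory=list)

    @property
    def text(self) -> str:
        # First non-reasoning text content, as proposed above.
        return next(c.text for c in self._contents if not c.is_reasoning)

    @property
    def reasoning(self) -> str:
        # Concatenation of all reasoning parts.
        return "".join(c.text for c in self._contents if c.is_reasoning)

msg = ChatMessage([
    TextContent("Let me compare the decimals...", is_reasoning=True),
    TextContent("9.8 is greater than 9.11."),
])
print(msg.reasoning)  # "Let me compare the decimals..."
print(msg.text)       # "9.8 is greater than 9.11."
```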

The other option is to have a separate reasoning_content field in StreamingChunk and a ReasoningContent class in ChatMessage._contents. This is more closely aligned with the current deepseek-reasoner API, but I feel it is slightly overcomplicated. I am also not sure whether both reasoning_content and content can appear in a single SSE chunk.
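For comparison, a sketch of the second option: a dedicated ReasoningContent type plus a reasoning_content delta on StreamingChunk, mirroring the deepseek-reasoner response shape. All names and the accumulate helper here are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class TextContent:
    text: str

@dataclass
class ReasoningContent:
    reasoning_text: str

@dataclass
class StreamingChunk:
    content: Optional[str] = None
    reasoning_content: Optional[str] = None  # typically only one is set per SSE delta

@dataclass
class ChatMessage:
    _contents: List[Union[TextContent, ReasoningContent]] = field(default_factory=list)

def accumulate(chunks: List[StreamingChunk]) -> ChatMessage:
    """Fold streamed deltas into a single ChatMessage."""
    reasoning, completion = [], []
    for ch in chunks:
        if ch.reasoning_content:
            reasoning.append(ch.reasoning_content)
        if ch.content:
            completion.append(ch.content)
    contents: List[Union[TextContent, ReasoningContent]] = []
    if reasoning:
        contents.append(ReasoningContent("".join(reasoning)))
    contents.append(TextContent("".join(completion)))
    return ChatMessage(contents)

msg = accumulate([
    StreamingChunk(reasoning_content="thinking "),
    StreamingChunk(reasoning_content="step"),
    StreamingChunk(content="final "),
    StreamingChunk(content="answer"),
])
```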

I did some research today, but there are too few reasoning models/APIs available to reach a consensus on what reasoning output should look like. It is probably better to start a discussion thread somewhere and explore the options.

@vblagoje
Member

vblagoje commented Jan 22, 2025

@LastRemote have you used o1 via the API? My hunch is to align with their abstractions, as others are likely to follow.
I played a bit yesterday with https://fireworks.ai/models/fireworks/deepseek-r1, and it seems that deepseek-r1 uses <thinking> tags before the actual output. But I'll say more once I try it via the API. I agree with you that it is important to get this right.

@LastRemote
Contributor Author

have you used o1 via API?

Unfortunately no, I do not have access. But according to the documentation, o1 does not support streaming, and its reasoning tokens are not visible in the response at the moment. DeepSeek-R1 is probably the groundbreaker here.

The DeepSeek-R1 model wraps the CoT content in special tokens in its raw response. Their API, however, handles this properly via the new reasoning_content field, which I believe is a good move, since different models are definitely going to use different special tokens for reasoning.
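One practical consequence for multi-round conversations: the DeepSeek reasoning-model guide linked above says reasoning_content should not be fed back into the next request. A minimal sketch of how a client might drop it when building the history (the dict shapes and the to_history_message helper are assumptions for illustration):

```python
def to_history_message(api_response_message: dict) -> dict:
    """Keep only role/content when appending an assistant reply to history;
    the reasoning_content key is deliberately dropped."""
    return {
        "role": api_response_message["role"],
        "content": api_response_message["content"],
    }

history = [{"role": "user", "content": "9.11 or 9.8, which is greater?"}]
reply = {
    "role": "assistant",
    "content": "9.8 is greater.",
    "reasoning_content": "Compare 0.80 with 0.11...",
}
history.append(to_history_message(reply))
history.append({"role": "user", "content": "Explain why."})
```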

@vblagoje
Member

Right now, Fireworks puts everything into response.choices[0].message.content, with the <think> part coming before the regular response.

{'replies': [ChatMessage(_role=<ChatRole.ASSISTANT: 'assistant'>, _content=[TextContent(text="<think>\nFirst, I need to compare the two numbers: 9.11 and 9.8. \n\nTo make an accurate comparison, I should ensure both numbers have the same number of decimal places. Currently, 9.11 has two decimal places, while 9.8 has only one. To align them, I'll convert 9.8 to 9.80.\n\nNow both numbers are: 9.11 and 9.80. \n\nNext, I'll compare the whole number parts of both numbers. In 9.11, the whole number is 9, and in 9.80, the whole number is also 9. Since the whole numbers are equal, I'll move to the decimal parts.\n\nIn 9.11, the decimal part is 0.11. In 9.80, the decimal part is 0.80. \n\nComparing 0.11 and 0.80, it's clear that 0.80 is greater than 0.11. Therefore, 9.80 (which is 9.8) is greater than 9.11.\n</think>\n\nTo determine which number is greater between \\(9.11\\) and \\(9.8\\), follow these steps:\n\n1. **Equalize Decimal Places:**\n   - Convert \\(9.8\\) to have two decimal places: \\(9.80\\)\n\n2. **Compare Whole Numbers:**\n   - Both numbers have the same whole number part: **9**\n\n3. **Compare Decimal Parts:**\n   - **0.11** (from \\(9.11\\)) vs. **0.80** (from \\(9.80\\))\n   - Since \\(0.80 > 0.11\\), \\(9.80\\) is greater than \\(9.11\\).\n\n**Final Answer:** \\(\\boxed{9.8}\\)")], _name=None, _meta={'model': 'accounts/fireworks/models/deepseek-r1', 'index': 0, 'finish_reason': 'stop', 'usage': {'completion_tokens': 382, 'prompt_tokens': 16, 'total_tokens': 398, 'completion_tokens_details': None, 'prompt_tokens_details': None}})]}
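Until providers converge on a schema, one client-side fallback for responses like the one above is to split the <think>...</think> block out of the raw content. A minimal sketch, assuming the tag pair appears at most once per message:

```python
import re

# Matches a single <think>...</think> block plus trailing whitespace.
THINK_RE = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_reasoning(content: str):
    """Return (reasoning, answer); reasoning is None if no <think> block."""
    match = THINK_RE.search(content)
    if not match:
        return None, content
    reasoning = match.group(1).strip()
    answer = THINK_RE.sub("", content, count=1).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>\nCompare decimals.\n</think>\n\n9.8 is greater."
)
print(reasoning)  # "Compare decimals."
print(answer)     # "9.8 is greater."
```

This kind of model-specific parsing is exactly what a reasoning_content-style API field would spare clients from maintaining.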

@marcklingen

marcklingen commented Jan 22, 2025

Chiming in here, as there is no native rendering for reasoning tokens yet in Langfuse (follow-up to this comment by @LastRemote).

I have not yet seen a stable schema for how reasoning tokens are included in the API response, as OpenAI does not return them. Would love to learn from this thread and add better support for it in Langfuse as well.
