
[Cursor] Improve Reasoning Tokens Documentation and Implementation #99

Merged (3 commits into grapeot:multi-agent, Feb 27, 2025)

Conversation

BoomMccloud

Overview

This PR improves the handling and documentation of reasoning tokens across the codebase, with a focus on clarifying provider-specific behaviors and ensuring proper test coverage.

Key Changes

  1. Documentation Improvements:

    • Added comprehensive docstrings explaining reasoning tokens in TokenUsage class
    • Enhanced query_llm function documentation to clarify provider-specific behaviors
    • Added detailed test docstrings explaining token tracking differences
  2. Implementation Updates:

    • Fixed token tracking for OpenAI's o1 model to properly capture reasoning tokens
    • Ensured non-o1 models explicitly set reasoning_tokens to None
    • Improved token tracking consistency across all providers
  3. Test Coverage:

    • Added specific test cases for o1 model reasoning tokens
    • Enhanced token tracking tests for all providers
    • Verified proper handling of reasoning tokens in non-o1 models
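
The `TokenUsage` behavior described above can be illustrated with a minimal sketch; the field names below are assumptions for illustration, not the repository's exact definitions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TokenUsage:
    """Token accounting for a single LLM call.

    reasoning_tokens is only populated for OpenAI's o1 model, which
    reports its hidden reasoning tokens separately; every other
    provider and model leaves it as None.
    """
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int
    reasoning_tokens: Optional[int] = None  # o1-only; None elsewhere

# Non-o1 call: reasoning_tokens stays at its None default
usage = TokenUsage(prompt_tokens=12, completion_tokens=34, total_tokens=46)
print(usage.reasoning_tokens)  # None
```

Making `None` the default keeps non-o1 code paths honest: a consumer can distinguish "this model reports no reasoning tokens" from a zero count.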

Technical Details

  • Reasoning tokens are specific to o1, OpenAI's most advanced model at the time of this PR
  • All other models (including other OpenAI models) will have reasoning_tokens=None
  • Token tracking behavior varies by provider:
    • OpenAI-style APIs: Full token tracking
    • Anthropic: Has its own token tracking system
    • Gemini: Token tracking not yet implemented
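
A sketch of the provider-specific extraction this implies; the attribute path `completion_tokens_details.reasoning_tokens` follows OpenAI's usage payload for o1, but the helper name and response shapes here are illustrative, not the repository's actual code:

```python
from types import SimpleNamespace
from typing import Optional

def extract_reasoning_tokens(model: str, usage) -> Optional[int]:
    """Return o1 reasoning tokens, or None for any other model.

    OpenAI's o1 usage payload nests the count under
    completion_tokens_details.reasoning_tokens; other providers and
    non-o1 OpenAI models do not report it, so we normalize to None.
    """
    if not model.startswith("o1"):
        return None
    details = getattr(usage, "completion_tokens_details", None)
    return getattr(details, "reasoning_tokens", None)

# Illustrative usage payload shaped like an OpenAI o1 response
o1_usage = SimpleNamespace(
    completion_tokens_details=SimpleNamespace(reasoning_tokens=128)
)
print(extract_reasoning_tokens("o1", o1_usage))      # 128
print(extract_reasoning_tokens("gpt-4o", o1_usage))  # None
```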

Testing

  • All 21 tests passing
  • Added specific test cases for reasoning tokens
  • Improved test documentation and coverage
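
The added o1 test cases might look like this sketch; `reasoning_tokens_from` is a hypothetical local stand-in for the code under test, and the response shapes are illustrative:

```python
from types import SimpleNamespace

def reasoning_tokens_from(model: str, usage):
    # Hypothetical mirror of the logic under test: only o1 exposes
    # usage.completion_tokens_details.reasoning_tokens.
    if not model.startswith("o1"):
        return None
    details = getattr(usage, "completion_tokens_details", None)
    return getattr(details, "reasoning_tokens", None)

def test_o1_reports_reasoning_tokens():
    usage = SimpleNamespace(
        completion_tokens_details=SimpleNamespace(reasoning_tokens=256)
    )
    assert reasoning_tokens_from("o1", usage) == 256

def test_non_o1_sets_reasoning_tokens_to_none():
    assert reasoning_tokens_from("gpt-4o", SimpleNamespace()) is None
```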

Related Issues

None

- Update plan_exec_llm to use query_llm from llm_api
- Remove redundant LLM client creation and token tracking logic
- Add support for multiple LLM providers and models via CLI arguments
- Simplify token usage tracking by leveraging existing infrastructure
- Remove hardcoded OpenAI-specific code to improve provider flexibility
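
The CLI-driven provider and model selection mentioned above could be sketched as follows; the flag names and defaults are assumptions for illustration, not the repository's actual interface:

```python
import argparse

# Hypothetical CLI surface for choosing an LLM backend and model
parser = argparse.ArgumentParser(
    description="Run a prompt against a configurable LLM provider"
)
parser.add_argument("--provider", choices=["openai", "anthropic", "gemini"],
                    default="openai", help="Which LLM backend to use")
parser.add_argument("--model", default="gpt-4o",
                    help="Model name understood by the chosen provider")

args = parser.parse_args(["--provider", "openai", "--model", "o1"])
print(args.provider, args.model)  # openai o1
```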
This commit improves the handling and documentation of reasoning tokens across the codebase:

- Added comprehensive docstrings explaining reasoning tokens
- Enhanced query_llm function documentation for provider-specific behaviors
- Fixed token tracking for o1 model and non-o1 models
- Improved test coverage and documentation
- Added CHANGELOG.md to track changes

@grapeot (Owner) commented Feb 25, 2025

Thank you @BoomMccloud ! I added a comment in the code, and am testing it locally. If you agree with the proposal and make the change, we can then merge it!

@BoomMccloud (Author) commented:

Updated the code; please let me know if there are any other comments.

@grapeot merged commit ce9de75 into grapeot:multi-agent on Feb 27, 2025
1 check passed
@grapeot (Owner) commented Feb 27, 2025

Looks good! Merged! Thanks a lot for the PR!
