Commit

Add FAQ section to reasoning
Cohee1207 committed Feb 12, 2025
1 parent df36ec0 commit 598b70b
Showing 1 changed file with 4 additions and 0 deletions.
4 changes: 4 additions & 0 deletions Usage/Prompts/reasoning.md
@@ -10,6 +10,10 @@ This information refers to a pre-release feature that is still in development. I

In language models, reasoning (also known as model thinking) refers to a chain-of-thought (CoT) technique that mirrors human problem-solving through step-by-step analysis. SillyTavern provides several features that make the use of reasoning models more efficient and consistent across supported backends.

## Common issues

1. When using reasoning models, the model's internal reasoning process consumes part of your response token allowance, even if that reasoning isn't shown in the final output (e.g. o3-mini or Gemini Thinking). If responses come back incomplete or empty, try increasing the Max Response Length setting in the **<i class="fa-solid fa-sliders"></i> AI Response Configuration** panel. Reasoning models typically need significantly higher token limits (anywhere from 1024 to 4096 tokens) than standard conversational models; see the sketch after this list.

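For illustration, here is a minimal TypeScript sketch of the underlying idea, assuming an OpenAI-compatible backend: the whole response budget (`max_completion_tokens` here) has to cover both the hidden reasoning tokens and the visible reply, so reasoning models get a larger value. The endpoint, model name, and exact limit are illustrative assumptions, not SillyTavern code; in SillyTavern itself this is controlled by the Max Response Length setting.

```ts
// Sketch only: an OpenAI-compatible chat completion request where the
// response budget is raised to leave room for hidden reasoning tokens.
async function requestWithReasoningBudget(prompt: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "o3-mini", // assumed reasoning model for this example
      messages: [{ role: "user", content: prompt }],
      // Hidden reasoning and the visible reply share this allowance.
      // If it is too small, the visible reply may arrive truncated or empty.
      max_completion_tokens: 4096,
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```
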
## Configuration

!!!
