Please follow the issue template to update the description of your issue.
📦 Deployment Method
Official installation package
📌 Version
2.15.2
💻 Operating System
macOS
📌 System Version
14.5
🌐 Browser
Chrome
📌 Browser Version
128.0.6613.138
🐛 Bug Description
Configuring the NextChat packaged client to work with Azure causes the response text to be cut off in the chat.
This was not an issue with older versions of the client; it started after I upgraded to 2.15.2.
📷 Recurrence Steps
Model Provider: Azure
Azure Endpoint: https://{resource-url}/openai
Custom Models: -all,{modelname}@Azure={deploymentName}
Max Tokens: 4000
Attached Message Count: 5
History Compression Threshold: 5000
Memory Prompt: yes
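
One way to narrow down where the cut-off happens is to call the same Azure deployment directly, bypassing the client; if the full completion comes back, the truncation is occurring in NextChat rather than at the endpoint. A minimal sketch, assuming the standard Azure OpenAI chat-completions REST API — the resource URL and deployment name are the same placeholders as above, and the `api-version` value is illustrative:

```shell
# Call the Azure deployment directly with the same max_tokens the client uses.
# {resource-url} and {deploymentName} are placeholders; set AZURE_OPENAI_API_KEY
# to the key configured in the client.
curl "https://{resource-url}/openai/deployments/{deploymentName}/chat/completions?api-version=2024-02-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
        "messages": [{"role": "user", "content": "Write three sentences about the sky."}],
        "max_tokens": 4000
      }'
```

If the `finish_reason` in the response is `"stop"` and the text is complete, the backend is returning the full answer and the truncation is client-side.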
🚦 Expected Behavior
The entire response should be shown, instead of only the first 10-15 tokens.
📝 Additional Information
No response