
OpenAI: model selection for o1 family falling back to gpt-4o #5357


What you might be seeing is the title-generation completion, which uses gpt-4o-mini after the first message of a new chat.

If you do not see the LLM icon turn from black to purple, then a fallback is likely not happening on LibreChat's end.

To be absolutely sure which model generated the completion, check your debug logs. They are found in ./logs at the project root if you are using Docker, or in ./api/logs if you are using npm.
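
If you want a quick check from the terminal, a small script like the one below can scan the log files for lines that mention a model. The log directory, file extension, and the substrings it matches on are assumptions based on a typical setup, not LibreChat's documented log format, so adjust them to match what you actually see in your logs.

```ts
// check-model.ts - rough sketch: scan debug log files for lines that mention
// a model name, to confirm which model served each completion.
// LOG_DIR, the ".log" extension, and the match patterns are assumptions.
import { readdirSync, readFileSync } from "fs";
import { join } from "path";

const LOG_DIR = process.env.LOG_DIR ?? "./logs"; // or "./api/logs" for npm installs

for (const file of readdirSync(LOG_DIR).filter((f) => f.endsWith(".log"))) {
  const lines = readFileSync(join(LOG_DIR, file), "utf8").split("\n");
  for (const line of lines) {
    // Keep only debug lines that reference a model; the substrings are guesses.
    if (line.includes("debug") && /model/i.test(line)) {
      console.log(`${file}: ${line.trim()}`);
    }
  }
}
```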

There is only a switch/fallback mechanism when you have images attached: o1-mini and o1-preview are not vision models, and LibreChat's default behavior is to fall back to a vision model when images are attached.
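
As a rough illustration of that behavior (not LibreChat's actual code), the decision boils down to something like the sketch below; the set of non-vision models and the fallback target are assumptions made for the example.

```ts
// Illustrative sketch only - not LibreChat's implementation.
// If images are attached and the selected model cannot handle vision,
// swap to a vision-capable model; otherwise keep the user's selection.
const NON_VISION_MODELS = new Set(["o1-mini", "o1-preview"]); // assumption for the sketch

function resolveModel(selected: string, hasImages: boolean): string {
  if (hasImages && NON_VISION_MODELS.has(selected)) {
    return "gpt-4o"; // assumed vision fallback target
  }
  return selected;
}

console.log(resolveModel("o1-preview", true));  // -> "gpt-4o"
console.log(resolveModel("o1-preview", false)); // -> "o1-preview"
```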


Debug log:

2025-01-18T15:09:38.406Z debug: …

Answer selected by simon-77
This discussion was converted from issue #5356 on January 18, 2025 15:08.