User is reporting that OpenAI's o1-mini is flaky and inconsistently returns this error:
{
  "error": {
    "message": "Invalid prompt: your prompt was flagged as potentially violating our usage policy. Please try again with a different prompt: https://platform.openai.com/docs/guides/reasoning#advice-on-prompting",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_prompt"
  }
}
This is not an error that BAML can retry today. Our retry-policy implementation is built entirely around API availability, i.e., we advance along the retry strategy if and only if api.openai.com is hard down. We don't retry application errors, because they usually imply that the BAML user's request is malformed, so retries won't make a difference.
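A minimal sketch of the availability-only policy described above (names are illustrative, not BAML's actual internals): connection failures and 5xx responses advance the retry strategy, while 4xx application errors fail immediately.

```python
# Hypothetical sketch of the current behavior: retry only on availability
# failures, never on application-level errors. Not BAML's real implementation.
def is_retryable(status_code):
    """Return True if this failure should advance the retry strategy."""
    if status_code is None:
        # No HTTP response at all: api.openai.com is unreachable / hard down.
        return True
    # 5xx means a server-side outage; 4xx means the request itself is bad.
    return 500 <= status_code < 600
```

Under this rule, the `invalid_prompt` error above is a 400-class response, so it never triggers a retry even when the flag is a transient guardrail false positive.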
But if reasoning model guardrails are going to cause flakiness on these calls, we need to add retries for this, and possibly expose the ability to define retry conditions to the user.
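One possible shape for that, sketched in Python rather than BAML syntax: a wrapper that treats a configurable set of application error codes (here just `invalid_prompt`, as an assumption) as transient and retries them with exponential backoff. The `ApiError` class and all names are hypothetical stand-ins, not an existing API.

```python
import time

# Assumption: only the guardrail false-positive code is worth retrying.
RETRYABLE_CODES = {"invalid_prompt"}

class ApiError(Exception):
    """Hypothetical stand-in for an invalid_request_error response."""
    def __init__(self, code):
        super().__init__(code)
        self.code = code

def call_with_retries(fn, max_retries=3, base_delay=0.0):
    """Call fn, retrying only errors whose code is user-declared retryable."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except ApiError as e:
            if e.code not in RETRYABLE_CODES or attempt == max_retries:
                raise  # non-retryable application error, or out of attempts
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
```

Exposing `RETRYABLE_CODES` (or a predicate over the error) to the user would let each project decide which application errors count as flaky rather than hard failures.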
Original Discord thread: https://discord.com/channels/1119368998161752075/1253172394345107466/1336810187788386347