add o1 to model #372
base: main
Conversation
🦋 Changeset detected. Latest commit: d63c666. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
generally looks good, just one comment on code style to avoid `delete`. e2e test is failing for everyone rn, most likely unrelated to this PR. looking into what's going on; will approve/merge once complete
@@ -100,9 +100,19 @@ export class OpenAIClient extends LLMClient {
        });
      }
    }

    if (this.modelName === "o1") {
      delete options.temperature;
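For context, a minimal, hypothetical sketch of the pattern this hunk introduces: when the selected model is o1, the temperature override is removed before the options object is forwarded to the chat completions call. The type and function names below are illustrative assumptions, not the repo's actual code.

```typescript
// Hypothetical sketch of the pattern in this hunk; type and function names
// are illustrative, not the actual Stagehand implementation.
type CompletionOptions = {
  temperature?: number;
  [key: string]: unknown;
};

function applyO1Restrictions(
  modelName: string,
  options: CompletionOptions,
): CompletionOptions {
  if (modelName === "o1") {
    // o1 only supports the default temperature, so drop any override
    // before the request is sent.
    delete options.temperature;
  }
  return options;
}
```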
can we instead do something like:
`const { temperature, top_p, frequency_penalty, presence_penalty, ...rest } = options`
then we can log each as being removed since the model is o1
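A self-contained sketch of that suggestion, assuming options carries the standard OpenAI sampling parameters; the helper name and log wording are assumptions, not code from this PR.

```typescript
// Hypothetical sketch of the suggested style: strip the sampling parameters
// o1 rejects via rest destructuring instead of `delete`, logging each field
// that gets dropped. Names here are assumptions, not code from this PR.
type SamplingOptions = {
  temperature?: number;
  top_p?: number;
  frequency_penalty?: number;
  presence_penalty?: number;
  [key: string]: unknown;
};

function stripUnsupportedForO1(options: SamplingOptions): SamplingOptions {
  const { temperature, top_p, frequency_penalty, presence_penalty, ...rest } =
    options;
  const dropped = { temperature, top_p, frequency_penalty, presence_penalty };
  for (const [key, value] of Object.entries(dropped)) {
    if (value !== undefined) {
      console.log(`dropping ${key}=${value} because the model is o1`);
    }
  }
  return rest;
}

// Usage: if (this.modelName === "o1") { options = stripUnsupportedForO1(options); }
```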
@@ -0,0 +1,33 @@
import { test, expect } from "@playwright/test";
this isn't a deterministic test; this directory is for e2e tests that don't leverage AI at all. this would be a better fit in evals/tasks, with the combination label added in evals.config.json
why

what changed
add o1 to the list of supported models

test plan
tested using the example