Commit: Update sonnet model (BoundaryML#1081)

> [!IMPORTANT]
> Update the model version to `claude-3-5-sonnet-20241022` across multiple files for consistency.
>
> - **Model Update**:
>   - Update the model version from `claude-3-5-sonnet-20240620` to `claude-3-5-sonnet-20241022` in `anthropic.mdx`, `clients.baml`, and `inlinedbaml.py`.
>   - Update the model version in `report.html` and `inlined.rb` to reflect the new model version.
>   - Ensure all references to the model in `inlinedbaml.ts` are updated to the new version.
>
> <sup>This description was created by [Ellipsis](https://www.ellipsis.dev?ref=BoundaryML%2Fbaml&utm_source=github&utm_medium=referral) for d930b60.</sup>

aaronvg authored Oct 22, 2024 · 1 parent b981604 · commit 71df0b7

Showing 8 changed files with 1,819 additions and 1,486 deletions.
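The commit is a mechanical version bump: every reference to `claude-3-5-sonnet-20240620` becomes `claude-3-5-sonnet-20241022`. A minimal sketch of how such a repo-wide bump could be scripted; the helper name and its file-extension list are illustrative, not part of the repo:

```python
from pathlib import Path

OLD_MODEL = "claude-3-5-sonnet-20240620"
NEW_MODEL = "claude-3-5-sonnet-20241022"

def bump_model_id(root: str, suffixes=(".baml", ".mdx", ".py", ".ts", ".rb", ".html")) -> int:
    """Replace OLD_MODEL with NEW_MODEL in every matching file under root.

    Returns the number of files rewritten.
    """
    changed = 0
    for path in Path(root).rglob("*"):
        if path.suffix not in suffixes or not path.is_file():
            continue
        text = path.read_text(encoding="utf-8")
        if OLD_MODEL in text:
            path.write_text(text.replace(OLD_MODEL, NEW_MODEL), encoding="utf-8")
            changed += 1
    return changed
```

Note that the generated files (`inlinedbaml.py`, `report.html`, and so on) would normally be regenerated by tooling rather than edited this way; the sketch only illustrates the textual effect of the diff below.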
6 changes: 3 additions & 3 deletions docs/docs/snippets/clients/providers/anthropic.mdx
@@ -11,7 +11,7 @@ Example:
 client<llm> MyClient {
   provider anthropic
   options {
-    model "claude-3-5-sonnet-20240620"
+    model "claude-3-5-sonnet-20241022"
     temperature 0
   }
 }
@@ -59,7 +59,7 @@ client<llm> MyClient {
   provider anthropic
   options {
     api_key env.MY_ANTHROPIC_KEY
-    model "claude-3-5-sonnet-20240620"
+    model "claude-3-5-sonnet-20241022"
     headers {
       "X-My-Header" "my-value"
     }
@@ -101,7 +101,7 @@ client<llm> MyClient {

 | Model |
 | --- |
-| `claude-3-5-sonnet-20240620` |
+| `claude-3-5-sonnet-20241022` |
 | `claude-3-opus-20240229` |
 | `claude-3-sonnet-20240229` |
 | `claude-3-haiku-20240307` |
@@ -19,7 +19,7 @@ client<llm> CustomGPT4oMini {
 client<llm> CustomSonnet {
   provider anthropic
   options {
-    model "claude-3-5-sonnet-20240620"
+    model "claude-3-5-sonnet-20241022"
     api_key env.ANTHROPIC_API_KEY
   }
 }
2 changes: 1 addition & 1 deletion integ-tests/baml_src/clients.baml
@@ -116,7 +116,7 @@ client<llm> AwsBedrock {
 client<llm> Sonnet {
   provider anthropic
   options {
-    model claude-3-5-sonnet-20240620
+    model claude-3-5-sonnet-20241022
     api_key env.ANTHROPIC_API_KEY
   }
 }
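Because several of the touched files are generated copies of the same client definitions (`inlinedbaml.py`, `inlinedbaml.ts`), a stale model ID can easily survive in one of them. A hedged sketch of a consistency check that scans `.baml` sources for deprecated IDs; the `DEPRECATED_IDS` set is illustrative, not from the repo:

```python
from pathlib import Path

DEPRECATED_IDS = {"claude-3-5-sonnet-20240620"}  # illustrative deprecation list

def find_stale_model_refs(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line_number, line) for lines still naming a deprecated model ID."""
    hits = []
    for path in Path(root).rglob("*.baml"):
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), start=1):
            if any(old in line for old in DEPRECATED_IDS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Run as a CI step, a non-empty result would flag exactly the kind of inconsistency this commit cleans up.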
2 changes: 1 addition & 1 deletion integ-tests/python/baml_client/inlinedbaml.py
@@ -16,7 +16,7 @@

file_map = {

"clients.baml": "retry_policy Bar {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Foo {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient<llm> GPT4 {\n provider openai\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient<llm> GPT4o {\n provider openai\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient<llm> GPT4Turbo {\n retry_policy Bar\n provider openai\n options {\n model gpt-4-turbo\n api_key env.OPENAI_API_KEY\n }\n} \n\nclient<llm> GPT35 {\n provider openai\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\nclient<llm> GPT35LegacyProvider {\n provider openai\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\n\nclient<llm> Ollama {\n provider ollama\n options {\n model llama2\n }\n}\n\nclient<llm> GPT35Azure {\n provider azure-openai\n options {\n resource_name \"west-us-azure-baml\"\n deployment_id \"gpt-35-turbo-default\"\n // base_url \"https://west-us-azure-baml.openai.azure.com/openai/deployments/gpt-35-turbo-default\"\n api_version \"2024-02-01\"\n api_key env.AZURE_OPENAI_API_KEY\n }\n}\n\nclient<llm> Gemini {\n provider google-ai\n options {\n model gemini-1.5-pro-001\n api_key env.GOOGLE_API_KEY\n safetySettings {\n category HARM_CATEGORY_HATE_SPEECH\n threshold BLOCK_LOW_AND_ABOVE\n\n }\n }\n}\n\nclient<llm> Vertex {\n provider vertex-ai \n options {\n model gemini-1.5-pro\n project_id sam-project-vertex-1\n location us-central1\n credentials env.INTEG_TESTS_GOOGLE_APPLICATION_CREDENTIALS\n\n }\n}\n\n\nclient<llm> AwsBedrock {\n provider aws-bedrock\n options {\n inference_configuration {\n max_tokens 100\n }\n model_id \"anthropic.claude-3-haiku-20240307-v1:0\"\n // model_id \"meta.llama3-8b-instruct-v1:0\"\n // model_id \"mistral.mistral-7b-instruct-v0:2\"\n api_key \"\"\n }\n}\n\nclient<llm> Sonnet {\n provider anthropic\n options {\n model claude-3-5-sonnet-20240620\n api_key 
env.ANTHROPIC_API_KEY\n }\n}\n\nclient<llm> Claude {\n provider anthropic\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n }\n}\n\nclient<llm> ClaudeWithCaching {\n provider anthropic\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n allowed_role_metadata [\"cache_control\"]\n headers {\n \"anthropic-beta\" \"prompt-caching-2024-07-31\"\n }\n }\n}\n\nclient<llm> Resilient_SimpleSyntax {\n retry_policy Foo\n provider baml-fallback\n options {\n strategy [\n GPT4Turbo\n GPT35\n Lottery_SimpleSyntax\n ]\n }\n} \n \nclient<llm> Lottery_SimpleSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n GPT35\n Claude\n ]\n }\n}\n\nclient<llm> TogetherAi {\n provider \"openai-generic\"\n options {\n base_url \"https://api.together.ai/v1\"\n api_key env.TOGETHER_API_KEY\n model \"meta-llama/Llama-3-70b-chat-hf\"\n }\n}\n\n",
"clients.baml": "retry_policy Bar {\n max_retries 3\n strategy {\n type exponential_backoff\n }\n}\n\nretry_policy Foo {\n max_retries 3\n strategy {\n type constant_delay\n delay_ms 100\n }\n}\n\nclient<llm> GPT4 {\n provider openai\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient<llm> GPT4o {\n provider openai\n options {\n model gpt-4o\n api_key env.OPENAI_API_KEY\n }\n} \n\n\nclient<llm> GPT4Turbo {\n retry_policy Bar\n provider openai\n options {\n model gpt-4-turbo\n api_key env.OPENAI_API_KEY\n }\n} \n\nclient<llm> GPT35 {\n provider openai\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\nclient<llm> GPT35LegacyProvider {\n provider openai\n options {\n model \"gpt-3.5-turbo\"\n api_key env.OPENAI_API_KEY\n }\n}\n\n\nclient<llm> Ollama {\n provider ollama\n options {\n model llama2\n }\n}\n\nclient<llm> GPT35Azure {\n provider azure-openai\n options {\n resource_name \"west-us-azure-baml\"\n deployment_id \"gpt-35-turbo-default\"\n // base_url \"https://west-us-azure-baml.openai.azure.com/openai/deployments/gpt-35-turbo-default\"\n api_version \"2024-02-01\"\n api_key env.AZURE_OPENAI_API_KEY\n }\n}\n\nclient<llm> Gemini {\n provider google-ai\n options {\n model gemini-1.5-pro-001\n api_key env.GOOGLE_API_KEY\n safetySettings {\n category HARM_CATEGORY_HATE_SPEECH\n threshold BLOCK_LOW_AND_ABOVE\n\n }\n }\n}\n\nclient<llm> Vertex {\n provider vertex-ai \n options {\n model gemini-1.5-pro\n project_id sam-project-vertex-1\n location us-central1\n credentials env.INTEG_TESTS_GOOGLE_APPLICATION_CREDENTIALS\n\n }\n}\n\n\nclient<llm> AwsBedrock {\n provider aws-bedrock\n options {\n inference_configuration {\n max_tokens 100\n }\n model_id \"anthropic.claude-3-haiku-20240307-v1:0\"\n // model_id \"meta.llama3-8b-instruct-v1:0\"\n // model_id \"mistral.mistral-7b-instruct-v0:2\"\n api_key \"\"\n }\n}\n\nclient<llm> Sonnet {\n provider anthropic\n options {\n model claude-3-5-sonnet-20241022\n api_key 
env.ANTHROPIC_API_KEY\n }\n}\n\nclient<llm> Claude {\n provider anthropic\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n }\n}\n\nclient<llm> ClaudeWithCaching {\n provider anthropic\n options {\n model claude-3-haiku-20240307\n api_key env.ANTHROPIC_API_KEY\n max_tokens 1000\n allowed_role_metadata [\"cache_control\"]\n headers {\n \"anthropic-beta\" \"prompt-caching-2024-07-31\"\n }\n }\n}\n\nclient<llm> Resilient_SimpleSyntax {\n retry_policy Foo\n provider baml-fallback\n options {\n strategy [\n GPT4Turbo\n GPT35\n Lottery_SimpleSyntax\n ]\n }\n} \n \nclient<llm> Lottery_SimpleSyntax {\n provider baml-round-robin\n options {\n start 0\n strategy [\n GPT35\n Claude\n ]\n }\n}\n\nclient<llm> TogetherAi {\n provider \"openai-generic\"\n options {\n base_url \"https://api.together.ai/v1\"\n api_key env.TOGETHER_API_KEY\n model \"meta-llama/Llama-3-70b-chat-hf\"\n }\n}\n\n",
"custom-task.baml": "class BookOrder {\n orderId string @description(#\"\n The ID of the book order\n \"#)\n title string @description(#\"\n The title of the ordered book\n \"#)\n quantity int @description(#\"\n The quantity of books ordered\n \"#)\n price float @description(#\"\n The price of the book\n \"#)\n}\n\nclass FlightConfirmation {\n confirmationNumber string @description(#\"\n The flight confirmation number\n \"#)\n flightNumber string @description(#\"\n The flight number\n \"#)\n departureTime string @description(#\"\n The scheduled departure time of the flight\n \"#)\n arrivalTime string @description(#\"\n The scheduled arrival time of the flight\n \"#)\n seatNumber string @description(#\"\n The seat number assigned on the flight\n \"#)\n}\n\nclass GroceryReceipt {\n receiptId string @description(#\"\n The ID of the grocery receipt\n \"#)\n storeName string @description(#\"\n The name of the grocery store\n \"#)\n items (string | int | float)[] @description(#\"\n A list of items purchased. Each item consists of a name, quantity, and price.\n \"#)\n totalAmount float @description(#\"\n The total amount spent on groceries\n \"#)\n}\n\nclass CustomTaskResult {\n bookOrder BookOrder | null\n flightConfirmation FlightConfirmation | null\n groceryReceipt GroceryReceipt | null\n}\n\nfunction CustomTask(input: string) -> BookOrder | FlightConfirmation | GroceryReceipt {\n client \"openai/gpt-4o-mini\"\n prompt #\"\n Given the input string, extract either an order for a book, a flight confirmation, or a grocery receipt.\n\n {{ ctx.output_format }}\n\n Input:\n \n {{ input}}\n \"#\n}\n\ntest CustomTask {\n functions [CustomTask]\n args {\n input #\"\nDear [Your Name],\n\nThank you for booking with [Airline Name]! 
We are pleased to confirm your upcoming flight.\n\nFlight Confirmation Details:\n\nBooking Reference: ABC123\nPassenger Name: [Your Name]\nFlight Number: XY789\nDeparture Date: September 15, 2024\nDeparture Time: 10:30 AM\nArrival Time: 1:45 PM\nDeparture Airport: John F. Kennedy International Airport (JFK), New York, NY\nArrival Airport: Los Angeles International Airport (LAX), Los Angeles, CA\nSeat Number: 12A\nClass: Economy\nBaggage Allowance:\n\nChecked Baggage: 1 piece, up to 23 kg\nCarry-On Baggage: 1 piece, up to 7 kg\nImportant Information:\n\nPlease arrive at the airport at least 2 hours before your scheduled departure.\nCheck-in online via our website or mobile app to save time at the airport.\nEnsure that your identification documents are up to date and match the name on your booking.\nContact Us:\n\nIf you have any questions or need to make changes to your booking, please contact our customer service team at 1-800-123-4567 or email us at support@[airline].com.\n\nWe wish you a pleasant journey and thank you for choosing [Airline Name].\n\nBest regards,\n\n[Airline Name] Customer Service\n \"#\n }\n}",
"fiddle-examples/chain-of-thought.baml": "class Email {\n subject string\n body string\n from_address string\n}\n\nenum OrderStatus {\n ORDERED\n SHIPPED\n DELIVERED\n CANCELLED\n}\n\nclass OrderInfo {\n order_status OrderStatus\n tracking_number string?\n estimated_arrival_date string?\n}\n\nfunction GetOrderInfo(email: Email) -> OrderInfo {\n client GPT4\n prompt #\"\n Given the email below:\n\n ```\n from: {{email.from_address}}\n Email Subject: {{email.subject}}\n Email Body: {{email.body}}\n ```\n\n Extract this info from the email in JSON format:\n {{ ctx.output_format }}\n\n Before you output the JSON, please explain your\n reasoning step-by-step. Here is an example on how to do this:\n 'If we think step by step we can see that ...\n therefore the output JSON is:\n {\n ... the json schema ...\n }'\n \"#\n}",
"fiddle-examples/chat-roles.baml": "// This will be available as an enum in your Python and Typescript code.\nenum Category2 {\n Refund\n CancelOrder\n TechnicalSupport\n AccountIssue\n Question\n}\n\nfunction ClassifyMessage2(input: string) -> Category {\n client GPT4\n\n prompt #\"\n {{ _.role(\"system\") }}\n // You can use _.role(\"system\") to indicate that this text should be a system message\n\n Classify the following INPUT into ONE\n of the following categories:\n\n {{ ctx.output_format }}\n\n {{ _.role(\"user\") }}\n // And _.role(\"user\") to indicate that this text should be a user message\n\n INPUT: {{ input }}\n\n Response:\n \"#\n}",
4 changes: 2 additions & 2 deletions integ-tests/python/report.html
@@ -427,7 +427,7 @@
---Parsed Response (string)---
&#34;Vibrant colors bloom\nMariachi music plays\nMexico&#39;s beauty&#34;
</samp></pre> </div> </div> </details> <details class="phase teardown "> <summary> <h4 class=title> <span class=phase-name>Teardown</span> </h4> </summary> <div class=content> </div> </details> </div> </div> </details> <details class="test passed"> <summary> <h3 class="title test-title"> <span class="status badge passed ">PASSED</span> <span class=test-name> <span class=funcname>test_works_with_failing_azure_fallback</span> </span> <span class=duration>0:00:00.003020</span> </h3> </summary> <div class=content> <table class=metadata> <tr> <th>Started</th> <td>2024-10-21 14:09:56</td> </tr> <tr> <th>Ended</th> <td>2024-10-21 14:09:56</td> </tr> <tr> <th>Duration</th> <td>0:00:00.003020</td> </tr> <tr> <th>Markers</th> <td> <div class=marker> <span class="marker-name badge">asyncio</span> <span class=marker-args> </span> </div> </td> </tr> <tr> <th>Fixtures</th> <td> <span class="badge fixturename"> event_loop_policy </span> <span class="badge fixturename"> cleanup </span> <span class="badge fixturename"> event_loop </span> </td> </tr> </table> <div class=test-phases> <details class="phase setup "> <summary> <h4 class=title> <span class=phase-name>Setup</span> </h4> </summary> <div class=content> </div> </details> <details class="phase call passed" open> <summary> <h4 class=title> <span class="status passed"></span> <span class=phase-name>Call</span> </h4> </summary> <div class=content> </div> </details> <details class="phase teardown "> <summary> <h4 class=title> <span class=phase-name>Teardown</span> </h4> </summary> <div class=content> </div> </details> </div> </div> </details> <details class="test passed"> <summary> <h3 class="title test-title"> <span class="status badge passed ">PASSED</span> <span class=test-name> <span class=funcname>test_claude</span> </span> <span class=duration>0:00:01.274422</span> </h3> </summary> <div class=content> <table class=metadata> <tr> <th>Started</th> <td>2024-10-21 14:09:56</td> </tr> <tr> <th>Ended</th> <td>2024-10-21 14:09:57</td> </tr> <tr> <th>Duration</th> <td>0:00:01.274422</td> </tr> <tr> <th>Markers</th> <td> <div class=marker> <span class="marker-name badge">asyncio</span> <span class=marker-args> </span> </div> </td> </tr> <tr> <th>Fixtures</th> <td> <span class="badge fixturename"> event_loop_policy </span> <span class="badge fixturename"> cleanup </span> <span class="badge fixturename"> event_loop </span> </td> </tr> </table> <div class=test-phases> <details class="phase setup "> <summary> <h4 class=title> <span class=phase-name>Setup</span> </h4> </summary> <div class=content> </div> </details> <details class="phase call passed" open> <summary> <h4 class=title> <span class="status passed"></span> <span class=phase-name>Call</span> </h4> </summary> <div class=content> <div class=section> <h5 class=section-title>Captured stderr call</h5> <pre><samp>[2024-10-21T21:09:57Z INFO baml_events] Function PromptTestClaude:
-Client: Sonnet (claude-3-5-sonnet-20240620) - 1265ms. StopReason: &#34;end_turn&#34;
+Client: Sonnet (claude-3-5-sonnet-20241022) - 1265ms. StopReason: &#34;end_turn&#34;
---PROMPT---
[chat] user: Tell me a haiku about Mt Rainier is tall

@@ -766,7 +766,7 @@
Snowcapped peak touches the sky
Nature&#39;s majesty
</samp></pre> </div> <div class=section> <h5 class=section-title>Captured stderr call</h5> <pre><samp>[2024-10-21T21:11:05Z INFO baml_events] Function PromptTestClaude:
-Client: Sonnet (claude-3-5-sonnet-20240620) - 1263ms. StopReason: &#34;end_turn&#34;
+Client: Sonnet (claude-3-5-sonnet-20241022) - 1263ms. StopReason: &#34;end_turn&#34;
---PROMPT---
[chat] user: Tell me a haiku about Mt Rainier is tall
