diff --git a/docs/README.md b/docs/README.md index 2dd842a53..a0aad137e 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,22 +1 @@ -# Fern Configuration - -View the documentation [here](https://boundary.docs.buildwithfern.com). - -## Updating your Docs - -### Local Development server - -To run a local development server with hot-reloading, you can run the following command - -```sh -fern docs dev -``` - -### Hosted URL - -Documentation is automatically updated when you push to main via the `fern generate` command. - -```sh -npm install -g fern-api # only required once -fern generate --docs -``` +If you're looking for docs, go to the [fern/](../fern) directory diff --git a/docs/old/01-guide/04-baml-basics/switching-llms.mdx b/docs/old/01-guide/04-baml-basics/switching-llms.mdx new file mode 100644 index 000000000..22d7157ff --- /dev/null +++ b/docs/old/01-guide/04-baml-basics/switching-llms.mdx @@ -0,0 +1,56 @@ +--- +title: Switching LLMs +--- + +Switch LLMs using the `client` property. You can use the shorthand form, or the longer form with a named client. + +The shorthand form is `<provider>/<model>`, which uses the `ANTHROPIC_API_KEY` or `OPENAI_API_KEY` environment variables as the defaults: + +```rust BAML +function MakeHaiku(topic: string) -> string { + client "openai/gpt-4o" // or anthropic/claude-3-5-sonnet-20241022 + prompt #" + Write a haiku about {{ topic }}. + "# +} +``` + +The longer form uses a named client, and supports setting any parameters the provider accepts, such as `temperature`, `top_p`, extra headers, or a custom `base_url`. + +```rust BAML +client MyClient { + provider "openai" + options { + model "gpt-4o" + api_key env.OPENAI_API_KEY + // other params like temperature, top_p, etc. + temperature 0.5 + base_url "https://my-custom-endpoint.com/v1" + // add headers + headers { + "anthropic-beta" "prompt-caching-2024-07-31" + } + } + +} + +function MakeHaiku(topic: string) -> string { + client MyClient + prompt #" + Write a haiku about {{ topic }}. + "# +} +``` + +Consult the [provider documentation](#fields) for a list of supported providers +and models, the default options, and how to set [retry policies](/docs/reference/retry-policy). + + +If you want to specify which client to use at runtime, in your Python/TS/Ruby code, +you can use the [client registry](/docs/calling-baml/client-registry) to do so. + +This can come in handy if you're trying to, say, send 10% of your requests to a +different model. + \ No newline at end of file diff --git a/docs/old/README.md b/docs/old/README.md new file mode 100644 index 000000000..2dd842a53 --- /dev/null +++ b/docs/old/README.md @@ -0,0 +1,22 @@ +# Fern Configuration + +View the documentation [here](https://boundary.docs.buildwithfern.com). + +## Updating your Docs + +### Local Development server + +To run a local development server with hot-reloading, you can run the following command + +```sh +fern docs dev +``` + +### Hosted URL + +Documentation is automatically updated when you push to main via the `fern generate` command. 
+ +```sh +npm install -g fern-api # only required once +fern generate --docs +``` diff --git a/docs/assets/baml-lamb-white.png b/docs/old/assets/baml-lamb-white.png similarity index 100% rename from docs/assets/baml-lamb-white.png rename to docs/old/assets/baml-lamb-white.png diff --git a/docs/assets/bfcl-baml-latest.png b/docs/old/assets/bfcl-baml-latest.png similarity index 100% rename from docs/assets/bfcl-baml-latest.png rename to docs/old/assets/bfcl-baml-latest.png diff --git a/docs/assets/bfcl-baml.png b/docs/old/assets/bfcl-baml.png similarity index 100% rename from docs/assets/bfcl-baml.png rename to docs/old/assets/bfcl-baml.png diff --git a/docs/assets/dashboard-test-pic.png b/docs/old/assets/dashboard-test-pic.png similarity index 100% rename from docs/assets/dashboard-test-pic.png rename to docs/old/assets/dashboard-test-pic.png diff --git a/docs/assets/favicon.ico b/docs/old/assets/favicon.ico similarity index 100% rename from docs/assets/favicon.ico rename to docs/old/assets/favicon.ico diff --git a/docs/assets/favicon.png b/docs/old/assets/favicon.png similarity index 100% rename from docs/assets/favicon.png rename to docs/old/assets/favicon.png diff --git a/docs/assets/images/analyzebook/prompt-img.png b/docs/old/assets/images/analyzebook/prompt-img.png similarity index 100% rename from docs/assets/images/analyzebook/prompt-img.png rename to docs/old/assets/images/analyzebook/prompt-img.png diff --git a/docs/assets/images/baml/baml-playground.png b/docs/old/assets/images/baml/baml-playground.png similarity index 100% rename from docs/assets/images/baml/baml-playground.png rename to docs/old/assets/images/baml/baml-playground.png diff --git a/docs/assets/images/baml/dashboard-analytics.png b/docs/old/assets/images/baml/dashboard-analytics.png similarity index 100% rename from docs/assets/images/baml/dashboard-analytics.png rename to docs/old/assets/images/baml/dashboard-analytics.png diff --git a/docs/assets/images/baml/dashboard-full-baml.png b/docs/old/assets/images/baml/dashboard-full-baml.png similarity index 100% rename from docs/assets/images/baml/dashboard-full-baml.png rename to docs/old/assets/images/baml/dashboard-full-baml.png diff --git a/docs/assets/images/baml/full-prompt-baml.png b/docs/old/assets/images/baml/full-prompt-baml.png similarity index 100% rename from docs/assets/images/baml/full-prompt-baml.png rename to docs/old/assets/images/baml/full-prompt-baml.png diff --git a/docs/assets/images/classify/level1_playground.png b/docs/old/assets/images/classify/level1_playground.png similarity index 100% rename from docs/assets/images/classify/level1_playground.png rename to docs/old/assets/images/classify/level1_playground.png diff --git a/docs/assets/images/dashboard/dashboard-test-pic.png b/docs/old/assets/images/dashboard/dashboard-test-pic.png similarity index 100% rename from docs/assets/images/dashboard/dashboard-test-pic.png rename to docs/old/assets/images/dashboard/dashboard-test-pic.png diff --git a/docs/assets/images/dashboardtest1.png b/docs/old/assets/images/dashboardtest1.png similarity index 100% rename from docs/assets/images/dashboardtest1.png rename to docs/old/assets/images/dashboardtest1.png diff --git a/docs/assets/images/docs_latest/openapi-diagram.drawio b/docs/old/assets/images/docs_latest/openapi-diagram.drawio similarity index 100% rename from docs/assets/images/docs_latest/openapi-diagram.drawio rename to docs/old/assets/images/docs_latest/openapi-diagram.drawio diff --git a/docs/assets/images/docs_latest/openapi-diagram.drawio.svg 
b/docs/old/assets/images/docs_latest/openapi-diagram.drawio.svg similarity index 100% rename from docs/assets/images/docs_latest/openapi-diagram.drawio.svg rename to docs/old/assets/images/docs_latest/openapi-diagram.drawio.svg diff --git a/docs/assets/images/docs_latest/vscode-output-dropdown.png b/docs/old/assets/images/docs_latest/vscode-output-dropdown.png similarity index 100% rename from docs/assets/images/docs_latest/vscode-output-dropdown.png rename to docs/old/assets/images/docs_latest/vscode-output-dropdown.png diff --git a/docs/assets/images/docs_latest/vscode-reload-window.png b/docs/old/assets/images/docs_latest/vscode-reload-window.png similarity index 100% rename from docs/assets/images/docs_latest/vscode-reload-window.png rename to docs/old/assets/images/docs_latest/vscode-reload-window.png diff --git a/docs/assets/images/docs_latest/vscode/code-lens.png b/docs/old/assets/images/docs_latest/vscode/code-lens.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/code-lens.png rename to docs/old/assets/images/docs_latest/vscode/code-lens.png diff --git a/docs/assets/images/docs_latest/vscode/dev-tools.png b/docs/old/assets/images/docs_latest/vscode/dev-tools.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/dev-tools.png rename to docs/old/assets/images/docs_latest/vscode/dev-tools.png diff --git a/docs/assets/images/docs_latest/vscode/extension-status.png b/docs/old/assets/images/docs_latest/vscode/extension-status.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/extension-status.png rename to docs/old/assets/images/docs_latest/vscode/extension-status.png diff --git a/docs/assets/images/docs_latest/vscode/open-playground.png b/docs/old/assets/images/docs_latest/vscode/open-playground.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/open-playground.png rename to docs/old/assets/images/docs_latest/vscode/open-playground.png diff --git a/docs/assets/images/docs_latest/vscode/playground-preview.png b/docs/old/assets/images/docs_latest/vscode/playground-preview.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/playground-preview.png rename to docs/old/assets/images/docs_latest/vscode/playground-preview.png diff --git a/docs/assets/images/docs_latest/vscode/test-case-buttons.png b/docs/old/assets/images/docs_latest/vscode/test-case-buttons.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/test-case-buttons.png rename to docs/old/assets/images/docs_latest/vscode/test-case-buttons.png diff --git a/docs/assets/images/docs_latest/vscode/test-cases.png b/docs/old/assets/images/docs_latest/vscode/test-cases.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/test-cases.png rename to docs/old/assets/images/docs_latest/vscode/test-cases.png diff --git a/docs/assets/images/docs_latest/vscode/vscode-settings.png b/docs/old/assets/images/docs_latest/vscode/vscode-settings.png similarity index 100% rename from docs/assets/images/docs_latest/vscode/vscode-settings.png rename to docs/old/assets/images/docs_latest/vscode/vscode-settings.png diff --git a/docs/assets/images/extract-verbs/extract-verbs-nouns-example.png b/docs/old/assets/images/extract-verbs/extract-verbs-nouns-example.png similarity index 100% rename from docs/assets/images/extract-verbs/extract-verbs-nouns-example.png rename to docs/old/assets/images/extract-verbs/extract-verbs-nouns-example.png diff --git 
a/docs/assets/images/extract-verbs/extract-verbs-prompt-dash.png b/docs/old/assets/images/extract-verbs/extract-verbs-prompt-dash.png similarity index 100% rename from docs/assets/images/extract-verbs/extract-verbs-prompt-dash.png rename to docs/old/assets/images/extract-verbs/extract-verbs-prompt-dash.png diff --git a/docs/assets/images/extract-verbs/stringify2.png b/docs/old/assets/images/extract-verbs/stringify2.png similarity index 100% rename from docs/assets/images/extract-verbs/stringify2.png rename to docs/old/assets/images/extract-verbs/stringify2.png diff --git a/docs/assets/images/glooinit.png b/docs/old/assets/images/glooinit.png similarity index 100% rename from docs/assets/images/glooinit.png rename to docs/old/assets/images/glooinit.png diff --git a/docs/assets/images/hero-dark.svg b/docs/old/assets/images/hero-dark.svg similarity index 100% rename from docs/assets/images/hero-dark.svg rename to docs/old/assets/images/hero-dark.svg diff --git a/docs/assets/images/hero-light.svg b/docs/old/assets/images/hero-light.svg similarity index 100% rename from docs/assets/images/hero-light.svg rename to docs/old/assets/images/hero-light.svg diff --git a/docs/assets/images/v3/AITeam.png b/docs/old/assets/images/v3/AITeam.png similarity index 100% rename from docs/assets/images/v3/AITeam.png rename to docs/old/assets/images/v3/AITeam.png diff --git a/docs/assets/images/v3/AITeam_Boundary.png b/docs/old/assets/images/v3/AITeam_Boundary.png similarity index 100% rename from docs/assets/images/v3/AITeam_Boundary.png rename to docs/old/assets/images/v3/AITeam_Boundary.png diff --git a/docs/assets/images/v3/BAML_compile_fail.png b/docs/old/assets/images/v3/BAML_compile_fail.png similarity index 100% rename from docs/assets/images/v3/BAML_compile_fail.png rename to docs/old/assets/images/v3/BAML_compile_fail.png diff --git a/docs/assets/images/v3/BAML_contract.png b/docs/old/assets/images/v3/BAML_contract.png similarity index 100% rename from docs/assets/images/v3/BAML_contract.png rename to docs/old/assets/images/v3/BAML_contract.png diff --git a/docs/assets/images/v3/BAML_deserializer_1.png b/docs/old/assets/images/v3/BAML_deserializer_1.png similarity index 100% rename from docs/assets/images/v3/BAML_deserializer_1.png rename to docs/old/assets/images/v3/BAML_deserializer_1.png diff --git a/docs/assets/images/v3/BAML_deserializer_2.png b/docs/old/assets/images/v3/BAML_deserializer_2.png similarity index 100% rename from docs/assets/images/v3/BAML_deserializer_2.png rename to docs/old/assets/images/v3/BAML_deserializer_2.png diff --git a/docs/assets/images/v3/BAML_flowchart.png b/docs/old/assets/images/v3/BAML_flowchart.png similarity index 100% rename from docs/assets/images/v3/BAML_flowchart.png rename to docs/old/assets/images/v3/BAML_flowchart.png diff --git a/docs/assets/images/v3/BAML_playground_720p.mov b/docs/old/assets/images/v3/BAML_playground_720p.mov similarity index 100% rename from docs/assets/images/v3/BAML_playground_720p.mov rename to docs/old/assets/images/v3/BAML_playground_720p.mov diff --git a/docs/assets/images/v3/BoundaryT_App.png b/docs/old/assets/images/v3/BoundaryT_App.png similarity index 100% rename from docs/assets/images/v3/BoundaryT_App.png rename to docs/old/assets/images/v3/BoundaryT_App.png diff --git a/docs/assets/images/v3/BoundaryToolchain.png b/docs/old/assets/images/v3/BoundaryToolchain.png similarity index 100% rename from docs/assets/images/v3/BoundaryToolchain.png rename to docs/old/assets/images/v3/BoundaryToolchain.png diff --git 
a/docs/assets/images/v3/StepByStep.png b/docs/old/assets/images/v3/StepByStep.png similarity index 100% rename from docs/assets/images/v3/StepByStep.png rename to docs/old/assets/images/v3/StepByStep.png diff --git a/docs/assets/images/v3/adapter_classify.png b/docs/old/assets/images/v3/adapter_classify.png similarity index 100% rename from docs/assets/images/v3/adapter_classify.png rename to docs/old/assets/images/v3/adapter_classify.png diff --git a/docs/assets/images/v3/baml-filetree.png b/docs/old/assets/images/v3/baml-filetree.png similarity index 100% rename from docs/assets/images/v3/baml-filetree.png rename to docs/old/assets/images/v3/baml-filetree.png diff --git a/docs/assets/images/v3/baml_filetree.png b/docs/old/assets/images/v3/baml_filetree.png similarity index 100% rename from docs/assets/images/v3/baml_filetree.png rename to docs/old/assets/images/v3/baml_filetree.png diff --git a/docs/assets/images/v3/baml_init.png b/docs/old/assets/images/v3/baml_init.png similarity index 100% rename from docs/assets/images/v3/baml_init.png rename to docs/old/assets/images/v3/baml_init.png diff --git a/docs/assets/images/v3/baml_playground.png b/docs/old/assets/images/v3/baml_playground.png similarity index 100% rename from docs/assets/images/v3/baml_playground.png rename to docs/old/assets/images/v3/baml_playground.png diff --git a/docs/assets/images/v3/boundary_studio_event.png b/docs/old/assets/images/v3/boundary_studio_event.png similarity index 100% rename from docs/assets/images/v3/boundary_studio_event.png rename to docs/old/assets/images/v3/boundary_studio_event.png diff --git a/docs/assets/images/v3/boundary_studio_pipeline.png b/docs/old/assets/images/v3/boundary_studio_pipeline.png similarity index 100% rename from docs/assets/images/v3/boundary_studio_pipeline.png rename to docs/old/assets/images/v3/boundary_studio_pipeline.png diff --git a/docs/assets/images/v3/boundary_studio_timeline.png b/docs/old/assets/images/v3/boundary_studio_timeline.png similarity index 100% rename from docs/assets/images/v3/boundary_studio_timeline.png rename to docs/old/assets/images/v3/boundary_studio_timeline.png diff --git a/docs/assets/images/v3/diagnose_feature1.png b/docs/old/assets/images/v3/diagnose_feature1.png similarity index 100% rename from docs/assets/images/v3/diagnose_feature1.png rename to docs/old/assets/images/v3/diagnose_feature1.png diff --git a/docs/assets/images/v3/extractverbs_playground.png b/docs/old/assets/images/v3/extractverbs_playground.png similarity index 100% rename from docs/assets/images/v3/extractverbs_playground.png rename to docs/old/assets/images/v3/extractverbs_playground.png diff --git a/docs/assets/images/v3/extractverbs_playground2.png b/docs/old/assets/images/v3/extractverbs_playground2.png similarity index 100% rename from docs/assets/images/v3/extractverbs_playground2.png rename to docs/old/assets/images/v3/extractverbs_playground2.png diff --git a/docs/assets/images/v3/generating_baml_clients.png b/docs/old/assets/images/v3/generating_baml_clients.png similarity index 100% rename from docs/assets/images/v3/generating_baml_clients.png rename to docs/old/assets/images/v3/generating_baml_clients.png diff --git a/docs/assets/images/v3/import_test.png b/docs/old/assets/images/v3/import_test.png similarity index 100% rename from docs/assets/images/v3/import_test.png rename to docs/old/assets/images/v3/import_test.png diff --git a/docs/assets/images/v3/open_playground.png b/docs/old/assets/images/v3/open_playground.png similarity index 100% rename from 
docs/assets/images/v3/open_playground.png rename to docs/old/assets/images/v3/open_playground.png diff --git a/docs/assets/images/v3/pipeline_view.png b/docs/old/assets/images/v3/pipeline_view.png similarity index 100% rename from docs/assets/images/v3/pipeline_view.png rename to docs/old/assets/images/v3/pipeline_view.png diff --git a/docs/assets/images/v3/prompt_view.gif b/docs/old/assets/images/v3/prompt_view.gif similarity index 100% rename from docs/assets/images/v3/prompt_view.gif rename to docs/old/assets/images/v3/prompt_view.gif diff --git a/docs/assets/images/v3/pydantic_serialize1.png b/docs/old/assets/images/v3/pydantic_serialize1.png similarity index 100% rename from docs/assets/images/v3/pydantic_serialize1.png rename to docs/old/assets/images/v3/pydantic_serialize1.png diff --git a/docs/assets/images/v3/pydantic_serialize2.png b/docs/old/assets/images/v3/pydantic_serialize2.png similarity index 100% rename from docs/assets/images/v3/pydantic_serialize2.png rename to docs/old/assets/images/v3/pydantic_serialize2.png diff --git a/docs/assets/images/v3/pydantic_serialize3.png b/docs/old/assets/images/v3/pydantic_serialize3.png similarity index 100% rename from docs/assets/images/v3/pydantic_serialize3.png rename to docs/old/assets/images/v3/pydantic_serialize3.png diff --git a/docs/assets/images/v3/resume_change1.png b/docs/old/assets/images/v3/resume_change1.png similarity index 100% rename from docs/assets/images/v3/resume_change1.png rename to docs/old/assets/images/v3/resume_change1.png diff --git a/docs/assets/images/v3/resume_error.png b/docs/old/assets/images/v3/resume_error.png similarity index 100% rename from docs/assets/images/v3/resume_error.png rename to docs/old/assets/images/v3/resume_error.png diff --git a/docs/assets/images/v3/resume_playground1.png b/docs/old/assets/images/v3/resume_playground1.png similarity index 100% rename from docs/assets/images/v3/resume_playground1.png rename to docs/old/assets/images/v3/resume_playground1.png diff --git a/docs/assets/images/v3/test-extract.gif b/docs/old/assets/images/v3/test-extract.gif similarity index 100% rename from docs/assets/images/v3/test-extract.gif rename to docs/old/assets/images/v3/test-extract.gif diff --git a/docs/assets/images/v3/test_table.png b/docs/old/assets/images/v3/test_table.png similarity index 100% rename from docs/assets/images/v3/test_table.png rename to docs/old/assets/images/v3/test_table.png diff --git a/docs/assets/images/v3/testing_2.gif b/docs/old/assets/images/v3/testing_2.gif similarity index 100% rename from docs/assets/images/v3/testing_2.gif rename to docs/old/assets/images/v3/testing_2.gif diff --git a/docs/assets/log_message.png b/docs/old/assets/log_message.png similarity index 100% rename from docs/assets/log_message.png rename to docs/old/assets/log_message.png diff --git a/docs/assets/logo-dark-mode.svg b/docs/old/assets/logo-dark-mode.svg similarity index 100% rename from docs/assets/logo-dark-mode.svg rename to docs/old/assets/logo-dark-mode.svg diff --git a/docs/assets/logo-light-mode.svg b/docs/old/assets/logo-light-mode.svg similarity index 100% rename from docs/assets/logo-light-mode.svg rename to docs/old/assets/logo-light-mode.svg diff --git a/docs/assets/logo.png b/docs/old/assets/logo.png similarity index 100% rename from docs/assets/logo.png rename to docs/old/assets/logo.png diff --git a/docs/assets/open-sans-v17-all-charsets-300.woff2 b/docs/old/assets/open-sans-v17-all-charsets-300.woff2 similarity index 100% rename from 
docs/assets/open-sans-v17-all-charsets-300.woff2 rename to docs/old/assets/open-sans-v17-all-charsets-300.woff2 diff --git a/docs/assets/open-sans-v17-all-charsets-700.woff2 b/docs/old/assets/open-sans-v17-all-charsets-700.woff2 similarity index 100% rename from docs/assets/open-sans-v17-all-charsets-700.woff2 rename to docs/old/assets/open-sans-v17-all-charsets-700.woff2 diff --git a/docs/assets/open-sans-v17-all-charsets-italic.woff2 b/docs/old/assets/open-sans-v17-all-charsets-italic.woff2 similarity index 100% rename from docs/assets/open-sans-v17-all-charsets-italic.woff2 rename to docs/old/assets/open-sans-v17-all-charsets-italic.woff2 diff --git a/docs/assets/open-sans-v17-all-charsets-regular.woff2 b/docs/old/assets/open-sans-v17-all-charsets-regular.woff2 similarity index 100% rename from docs/assets/open-sans-v17-all-charsets-regular.woff2 rename to docs/old/assets/open-sans-v17-all-charsets-regular.woff2 diff --git a/docs/assets/styles.css b/docs/old/assets/styles.css similarity index 100% rename from docs/assets/styles.css rename to docs/old/assets/styles.css diff --git a/docs/docs.yml b/docs/old/docs.yml similarity index 100% rename from docs/docs.yml rename to docs/old/docs.yml diff --git a/docs/docs/api/examples.mdx b/docs/old/docs/api/examples.mdx similarity index 100% rename from docs/docs/api/examples.mdx rename to docs/old/docs/api/examples.mdx diff --git a/docs/docs/api/summary.mdx b/docs/old/docs/api/summary.mdx similarity index 100% rename from docs/docs/api/summary.mdx rename to docs/old/docs/api/summary.mdx diff --git a/docs/docs/baml-nextjs/baml-nextjs.mdx b/docs/old/docs/baml-nextjs/baml-nextjs.mdx similarity index 100% rename from docs/docs/baml-nextjs/baml-nextjs.mdx rename to docs/old/docs/baml-nextjs/baml-nextjs.mdx diff --git a/docs/docs/calling-baml/calling-functions.mdx b/docs/old/docs/calling-baml/calling-functions.mdx similarity index 100% rename from docs/docs/calling-baml/calling-functions.mdx rename to docs/old/docs/calling-baml/calling-functions.mdx diff --git a/docs/docs/calling-baml/client-registry.mdx b/docs/old/docs/calling-baml/client-registry.mdx similarity index 100% rename from docs/docs/calling-baml/client-registry.mdx rename to docs/old/docs/calling-baml/client-registry.mdx diff --git a/docs/docs/calling-baml/concurrent-calls.mdx b/docs/old/docs/calling-baml/concurrent-calls.mdx similarity index 100% rename from docs/docs/calling-baml/concurrent-calls.mdx rename to docs/old/docs/calling-baml/concurrent-calls.mdx diff --git a/docs/docs/calling-baml/dynamic-types.mdx b/docs/old/docs/calling-baml/dynamic-types.mdx similarity index 95% rename from docs/docs/calling-baml/dynamic-types.mdx rename to docs/old/docs/calling-baml/dynamic-types.mdx index fcec06446..d43ba27ca 100644 --- a/docs/docs/calling-baml/dynamic-types.mdx +++ b/docs/old/docs/calling-baml/dynamic-types.mdx @@ -183,10 +183,11 @@ async def run(): tb = TypeBuilder() hobbies_enum = tb.add_enum("Hobbies") hobbies_enum.add_value("Soccer") - hobbies_enum.add_value("Reading") + hobbies_enum.add_value("Reading").description("A hobby of reading") address_class = tb.add_class("Address") - address_class.add_property("street", tb.string()) + # You can also add descriptions to your types + address_class.add_property("street", tb.string()).description("The street address") tb.User.add_property("hobby", hobbies_enum.type().optional()) tb.User.add_property("address", address_class.type().optional()) @@ -204,10 +205,10 @@ async function run() { const tb = new TypeBuilder() const hobbiesEnum = 
tb.addEnum('Hobbies') hobbiesEnum.addValue('Soccer') - hobbiesEnum.addValue('Reading') + hobbiesEnum.addValue('Reading').description('A hobby of reading') const addressClass = tb.addClass('Address') - addressClass.addProperty('street', tb.string()) + addressClass.addProperty('street', tb.string()).description('The street address') tb.User.addProperty('hobby', hobbiesEnum.type().optional()) diff --git a/docs/docs/calling-baml/exceptions.mdx b/docs/old/docs/calling-baml/exceptions.mdx similarity index 100% rename from docs/docs/calling-baml/exceptions.mdx rename to docs/old/docs/calling-baml/exceptions.mdx diff --git a/docs/docs/calling-baml/generate-baml-client.mdx b/docs/old/docs/calling-baml/generate-baml-client.mdx similarity index 100% rename from docs/docs/calling-baml/generate-baml-client.mdx rename to docs/old/docs/calling-baml/generate-baml-client.mdx diff --git a/docs/docs/calling-baml/multi-modal.mdx b/docs/old/docs/calling-baml/multi-modal.mdx similarity index 99% rename from docs/docs/calling-baml/multi-modal.mdx rename to docs/old/docs/calling-baml/multi-modal.mdx index 3b4440fb9..5b8110b42 100644 --- a/docs/docs/calling-baml/multi-modal.mdx +++ b/docs/old/docs/calling-baml/multi-modal.mdx @@ -5,7 +5,7 @@ slug: docs/calling-baml/multi-modal ## Multi-modal input -### Images +### Image Calling a BAML function with an `image` input argument type (see [image types](../snippets/supported-types.mdx)). The `from_url` and `from_base64` methods create an `Image` object based on input type. diff --git a/docs/docs/calling-baml/set-env-vars.mdx b/docs/old/docs/calling-baml/set-env-vars.mdx similarity index 100% rename from docs/docs/calling-baml/set-env-vars.mdx rename to docs/old/docs/calling-baml/set-env-vars.mdx diff --git a/docs/docs/calling-baml/streaming.mdx b/docs/old/docs/calling-baml/streaming.mdx similarity index 100% rename from docs/docs/calling-baml/streaming.mdx rename to docs/old/docs/calling-baml/streaming.mdx diff --git a/docs/docs/calling-baml/validations.mdx b/docs/old/docs/calling-baml/validations.mdx similarity index 98% rename from docs/docs/calling-baml/validations.mdx rename to docs/old/docs/calling-baml/validations.mdx index 6c5e17d04..ac66a90a4 100644 --- a/docs/docs/calling-baml/validations.mdx +++ b/docs/old/docs/calling-baml/validations.mdx @@ -42,7 +42,7 @@ function NextInt8(a: int) -> int @assert(ok_int8, {{ this >= -128 and this < 127 ### Using `@assert` with `Union` Types -Note that when using [`Unions`](../snippets/supported-types.mdx#union-), it is +Note that when using [`Unions`](/ref/baml/types#union-), it is crucial to specify where the `@assert` attribute is applied within the union type, as it is not known until runtime which type the value will be. @@ -83,7 +83,7 @@ regular expressions. In the future, we plan to support shorthand syntax for common assertions to make writing them easier. -For now, see our [Jinja cookbook / guide](../snippets/prompt-syntax/what-is-jinja.mdx) +For now, see our [Jinja cookbook / guide](/ref/prompt-syntax/what-is-jinja) or the [Minijinja filters docs](https://docs.rs/minijinja/latest/minijinja/filters/index.html#functions) for more information on writing expressions. 
diff --git a/docs/docs/comparisons/langchain.mdx b/docs/old/docs/comparisons/langchain.mdx similarity index 100% rename from docs/docs/comparisons/langchain.mdx rename to docs/old/docs/comparisons/langchain.mdx diff --git a/docs/docs/comparisons/marvin.mdx b/docs/old/docs/comparisons/marvin.mdx similarity index 100% rename from docs/docs/comparisons/marvin.mdx rename to docs/old/docs/comparisons/marvin.mdx diff --git a/docs/docs/comparisons/pydantic.mdx b/docs/old/docs/comparisons/pydantic.mdx similarity index 100% rename from docs/docs/comparisons/pydantic.mdx rename to docs/old/docs/comparisons/pydantic.mdx diff --git a/docs/docs/contact.mdx b/docs/old/docs/contact.mdx similarity index 100% rename from docs/docs/contact.mdx rename to docs/old/docs/contact.mdx diff --git a/docs/docs/dashboard/extraction-api.mdx b/docs/old/docs/dashboard/extraction-api.mdx similarity index 100% rename from docs/docs/dashboard/extraction-api.mdx rename to docs/old/docs/dashboard/extraction-api.mdx diff --git a/docs/docs/doc-snippets/vscode-settings.mdx b/docs/old/docs/doc-snippets/vscode-settings.mdx similarity index 100% rename from docs/docs/doc-snippets/vscode-settings.mdx rename to docs/old/docs/doc-snippets/vscode-settings.mdx diff --git a/docs/docs/get-started/debugging/enable-logging.mdx b/docs/old/docs/get-started/debugging/enable-logging.mdx similarity index 100% rename from docs/docs/get-started/debugging/enable-logging.mdx rename to docs/old/docs/get-started/debugging/enable-logging.mdx diff --git a/docs/docs/get-started/debugging/vscode-playground.mdx b/docs/old/docs/get-started/debugging/vscode-playground.mdx similarity index 100% rename from docs/docs/get-started/debugging/vscode-playground.mdx rename to docs/old/docs/get-started/debugging/vscode-playground.mdx diff --git a/docs/docs/get-started/deploying/aws.mdx b/docs/old/docs/get-started/deploying/aws.mdx similarity index 100% rename from docs/docs/get-started/deploying/aws.mdx rename to docs/old/docs/get-started/deploying/aws.mdx diff --git a/docs/docs/get-started/deploying/docker.mdx b/docs/old/docs/get-started/deploying/docker.mdx similarity index 100% rename from docs/docs/get-started/deploying/docker.mdx rename to docs/old/docs/get-started/deploying/docker.mdx diff --git a/docs/docs/get-started/deploying/nextjs.mdx b/docs/old/docs/get-started/deploying/nextjs.mdx similarity index 100% rename from docs/docs/get-started/deploying/nextjs.mdx rename to docs/old/docs/get-started/deploying/nextjs.mdx diff --git a/docs/docs/get-started/deploying/openapi.mdx b/docs/old/docs/get-started/deploying/openapi.mdx similarity index 100% rename from docs/docs/get-started/deploying/openapi.mdx rename to docs/old/docs/get-started/deploying/openapi.mdx diff --git a/docs/docs/get-started/interactive-demos.mdx b/docs/old/docs/get-started/interactive-demos.mdx similarity index 100% rename from docs/docs/get-started/interactive-demos.mdx rename to docs/old/docs/get-started/interactive-demos.mdx diff --git a/docs/docs/get-started/quickstart/editors-other.mdx b/docs/old/docs/get-started/quickstart/editors-other.mdx similarity index 100% rename from docs/docs/get-started/quickstart/editors-other.mdx rename to docs/old/docs/get-started/quickstart/editors-other.mdx diff --git a/docs/docs/get-started/quickstart/editors-vscode.mdx b/docs/old/docs/get-started/quickstart/editors-vscode.mdx similarity index 100% rename from docs/docs/get-started/quickstart/editors-vscode.mdx rename to docs/old/docs/get-started/quickstart/editors-vscode.mdx diff --git 
a/docs/docs/get-started/quickstart/openapi.mdx b/docs/old/docs/get-started/quickstart/openapi.mdx similarity index 100% rename from docs/docs/get-started/quickstart/openapi.mdx rename to docs/old/docs/get-started/quickstart/openapi.mdx diff --git a/docs/docs/get-started/quickstart/python.mdx b/docs/old/docs/get-started/quickstart/python.mdx similarity index 100% rename from docs/docs/get-started/quickstart/python.mdx rename to docs/old/docs/get-started/quickstart/python.mdx diff --git a/docs/docs/get-started/quickstart/ruby.mdx b/docs/old/docs/get-started/quickstart/ruby.mdx similarity index 100% rename from docs/docs/get-started/quickstart/ruby.mdx rename to docs/old/docs/get-started/quickstart/ruby.mdx diff --git a/docs/docs/get-started/quickstart/typescript.mdx b/docs/old/docs/get-started/quickstart/typescript.mdx similarity index 100% rename from docs/docs/get-started/quickstart/typescript.mdx rename to docs/old/docs/get-started/quickstart/typescript.mdx diff --git a/docs/docs/get-started/what-is-baml.mdx b/docs/old/docs/get-started/what-is-baml.mdx similarity index 100% rename from docs/docs/get-started/what-is-baml.mdx rename to docs/old/docs/get-started/what-is-baml.mdx diff --git a/docs/docs/incidents/2024-07-10-ssrf-issue-in-fiddle-proxy.mdx b/docs/old/docs/incidents/2024-07-10-ssrf-issue-in-fiddle-proxy.mdx similarity index 100% rename from docs/docs/incidents/2024-07-10-ssrf-issue-in-fiddle-proxy.mdx rename to docs/old/docs/incidents/2024-07-10-ssrf-issue-in-fiddle-proxy.mdx diff --git a/docs/docs/observability/overview.mdx b/docs/old/docs/observability/overview.mdx similarity index 100% rename from docs/docs/observability/overview.mdx rename to docs/old/docs/observability/overview.mdx diff --git a/docs/docs/observability/tracing-tagging.mdx b/docs/old/docs/observability/tracing-tagging.mdx similarity index 100% rename from docs/docs/observability/tracing-tagging.mdx rename to docs/old/docs/observability/tracing-tagging.mdx diff --git a/docs/docs/reference/env-vars.mdx b/docs/old/docs/reference/env-vars.mdx similarity index 100% rename from docs/docs/reference/env-vars.mdx rename to docs/old/docs/reference/env-vars.mdx diff --git a/docs/docs/snippets/class.mdx b/docs/old/docs/snippets/class.mdx similarity index 100% rename from docs/docs/snippets/class.mdx rename to docs/old/docs/snippets/class.mdx diff --git a/docs/docs/snippets/client-constructor.mdx b/docs/old/docs/snippets/client-constructor.mdx similarity index 100% rename from docs/docs/snippets/client-constructor.mdx rename to docs/old/docs/snippets/client-constructor.mdx diff --git a/docs/docs/snippets/clients/fallback.mdx b/docs/old/docs/snippets/clients/fallback.mdx similarity index 100% rename from docs/docs/snippets/clients/fallback.mdx rename to docs/old/docs/snippets/clients/fallback.mdx diff --git a/docs/docs/snippets/clients/overview.mdx b/docs/old/docs/snippets/clients/overview.mdx similarity index 96% rename from docs/docs/snippets/clients/overview.mdx rename to docs/old/docs/snippets/clients/overview.mdx index 64ba4d081..f1dbec8a2 100644 --- a/docs/docs/snippets/clients/overview.mdx +++ b/docs/old/docs/snippets/clients/overview.mdx @@ -20,7 +20,7 @@ client MyClient { provider "openai" options { model "gpt-4o" - // api_key defaults to env.OPENAI_API_KEY + api_key env.OPENAI_API_KEY } } diff --git a/docs/docs/snippets/clients/providers/anthropic.mdx b/docs/old/docs/snippets/clients/providers/anthropic.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/anthropic.mdx rename to 
docs/old/docs/snippets/clients/providers/anthropic.mdx diff --git a/docs/docs/snippets/clients/providers/aws-bedrock.mdx b/docs/old/docs/snippets/clients/providers/aws-bedrock.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/aws-bedrock.mdx rename to docs/old/docs/snippets/clients/providers/aws-bedrock.mdx diff --git a/docs/docs/snippets/clients/providers/azure.mdx b/docs/old/docs/snippets/clients/providers/azure.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/azure.mdx rename to docs/old/docs/snippets/clients/providers/azure.mdx diff --git a/docs/docs/snippets/clients/providers/gemini.mdx b/docs/old/docs/snippets/clients/providers/gemini.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/gemini.mdx rename to docs/old/docs/snippets/clients/providers/gemini.mdx diff --git a/docs/docs/snippets/clients/providers/groq.mdx b/docs/old/docs/snippets/clients/providers/groq.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/groq.mdx rename to docs/old/docs/snippets/clients/providers/groq.mdx diff --git a/docs/docs/snippets/clients/providers/huggingface.mdx b/docs/old/docs/snippets/clients/providers/huggingface.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/huggingface.mdx rename to docs/old/docs/snippets/clients/providers/huggingface.mdx diff --git a/docs/docs/snippets/clients/providers/keywordsai.mdx b/docs/old/docs/snippets/clients/providers/keywordsai.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/keywordsai.mdx rename to docs/old/docs/snippets/clients/providers/keywordsai.mdx diff --git a/docs/docs/snippets/clients/providers/lmstudio.mdx b/docs/old/docs/snippets/clients/providers/lmstudio.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/lmstudio.mdx rename to docs/old/docs/snippets/clients/providers/lmstudio.mdx diff --git a/docs/docs/snippets/clients/providers/ollama.mdx b/docs/old/docs/snippets/clients/providers/ollama.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/ollama.mdx rename to docs/old/docs/snippets/clients/providers/ollama.mdx diff --git a/docs/docs/snippets/clients/providers/openai-generic.mdx b/docs/old/docs/snippets/clients/providers/openai-generic.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/openai-generic.mdx rename to docs/old/docs/snippets/clients/providers/openai-generic.mdx diff --git a/docs/docs/snippets/clients/providers/openai.mdx b/docs/old/docs/snippets/clients/providers/openai.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/openai.mdx rename to docs/old/docs/snippets/clients/providers/openai.mdx diff --git a/docs/docs/snippets/clients/providers/openrouter.mdx b/docs/old/docs/snippets/clients/providers/openrouter.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/openrouter.mdx rename to docs/old/docs/snippets/clients/providers/openrouter.mdx diff --git a/docs/docs/snippets/clients/providers/together.mdx b/docs/old/docs/snippets/clients/providers/together.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/together.mdx rename to docs/old/docs/snippets/clients/providers/together.mdx diff --git a/docs/docs/snippets/clients/providers/vertex.mdx b/docs/old/docs/snippets/clients/providers/vertex.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/vertex.mdx rename to docs/old/docs/snippets/clients/providers/vertex.mdx diff --git 
a/docs/docs/snippets/clients/providers/vllm.mdx b/docs/old/docs/snippets/clients/providers/vllm.mdx similarity index 100% rename from docs/docs/snippets/clients/providers/vllm.mdx rename to docs/old/docs/snippets/clients/providers/vllm.mdx diff --git a/docs/docs/snippets/clients/retry.mdx b/docs/old/docs/snippets/clients/retry.mdx similarity index 100% rename from docs/docs/snippets/clients/retry.mdx rename to docs/old/docs/snippets/clients/retry.mdx diff --git a/docs/docs/snippets/clients/round-robin.mdx b/docs/old/docs/snippets/clients/round-robin.mdx similarity index 100% rename from docs/docs/snippets/clients/round-robin.mdx rename to docs/old/docs/snippets/clients/round-robin.mdx diff --git a/docs/docs/snippets/enum.mdx b/docs/old/docs/snippets/enum.mdx similarity index 100% rename from docs/docs/snippets/enum.mdx rename to docs/old/docs/snippets/enum.mdx diff --git a/docs/docs/snippets/functions/classification.mdx b/docs/old/docs/snippets/functions/classification.mdx similarity index 100% rename from docs/docs/snippets/functions/classification.mdx rename to docs/old/docs/snippets/functions/classification.mdx diff --git a/docs/docs/snippets/functions/extraction.mdx b/docs/old/docs/snippets/functions/extraction.mdx similarity index 100% rename from docs/docs/snippets/functions/extraction.mdx rename to docs/old/docs/snippets/functions/extraction.mdx diff --git a/docs/docs/snippets/functions/function-calling.mdx b/docs/old/docs/snippets/functions/function-calling.mdx similarity index 88% rename from docs/docs/snippets/functions/function-calling.mdx rename to docs/old/docs/snippets/functions/function-calling.mdx index 9b5fb57f8..808c9c5b1 100644 --- a/docs/docs/snippets/functions/function-calling.mdx +++ b/docs/old/docs/snippets/functions/function-calling.mdx @@ -55,9 +55,9 @@ function UseTool(user_message: string) -> (WeatherAPI | MyOtherAPI)[] { ``` ## Function-calling APIs vs Prompting -Injecting your function schemas into the prompt, as BAML does, outperforms function-calling across all benchmarks for major providers (see [Berkeley's Function-calling Leaderboard](https://gorilla.cs.berkeley.edu/leaderboard.html), where "Prompt" outperforms "FC"). +Injecting your function schemas into the prompt, as BAML does, outperforms function-calling across all benchmarks for major providers ([see our Berkeley FC Benchmark results with BAML](https://www.boundaryml.com/blog/sota-function-calling?q=0)). Keep in mind that "JSON mode" is nearly the same thing as "prompting", but it enforces that the LLM response is ONLY a JSON blob. BAML does not use JSON mode, since plain prompting lets developers use better techniques like chain-of-thought, allowing the LLM to express its reasoning before printing out the actual schema. BAML's parser can extract the JSON payload(s) from free-form text for you. 
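To make that concrete from the caller's side, here is a minimal sketch of invoking the `UseTool` function above through a generated Python client — assuming a `baml_client` generated from that BAML function (the import paths follow the Python quickstart later in this PR):

```python
# A sketch, assuming a baml_client generated from the UseTool function above.
from baml_client.sync_client import b
from baml_client.types import WeatherAPI, MyOtherAPI

def route_tools(user_message: str) -> None:
    # The LLM replies in free-form text (possibly with chain-of-thought);
    # BAML's parser extracts the typed (WeatherAPI | MyOtherAPI)[] result from it.
    tools = b.UseTool(user_message)
    for tool in tools:
        if isinstance(tool, WeatherAPI):
            print("weather request:", tool)
        elif isinstance(tool, MyOtherAPI):
            print("other API request:", tool)

route_tools("What's the weather like in San Francisco?")
```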
-BAML may support native function-calling APIs in the future (please let us know more about your use-case so we can prioritize accordingly) \ No newline at end of file +BAML will support native function-calling APIs in the future (please let us know more about your use-case so we can prioritize accordingly) diff --git a/docs/docs/snippets/functions/overview.mdx b/docs/old/docs/snippets/functions/overview.mdx similarity index 100% rename from docs/docs/snippets/functions/overview.mdx rename to docs/old/docs/snippets/functions/overview.mdx diff --git a/docs/docs/snippets/prompt-syntax/comments.mdx b/docs/old/docs/snippets/prompt-syntax/comments.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/comments.mdx rename to docs/old/docs/snippets/prompt-syntax/comments.mdx diff --git a/docs/docs/snippets/prompt-syntax/conditionals.mdx b/docs/old/docs/snippets/prompt-syntax/conditionals.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/conditionals.mdx rename to docs/old/docs/snippets/prompt-syntax/conditionals.mdx diff --git a/docs/docs/snippets/prompt-syntax/ctx.mdx b/docs/old/docs/snippets/prompt-syntax/ctx.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/ctx.mdx rename to docs/old/docs/snippets/prompt-syntax/ctx.mdx diff --git a/docs/docs/snippets/prompt-syntax/loops.mdx b/docs/old/docs/snippets/prompt-syntax/loops.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/loops.mdx rename to docs/old/docs/snippets/prompt-syntax/loops.mdx diff --git a/docs/docs/snippets/prompt-syntax/output-format.mdx b/docs/old/docs/snippets/prompt-syntax/output-format.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/output-format.mdx rename to docs/old/docs/snippets/prompt-syntax/output-format.mdx diff --git a/docs/docs/snippets/prompt-syntax/roles.mdx b/docs/old/docs/snippets/prompt-syntax/roles.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/roles.mdx rename to docs/old/docs/snippets/prompt-syntax/roles.mdx diff --git a/docs/docs/snippets/prompt-syntax/variables.mdx b/docs/old/docs/snippets/prompt-syntax/variables.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/variables.mdx rename to docs/old/docs/snippets/prompt-syntax/variables.mdx diff --git a/docs/docs/snippets/prompt-syntax/what-is-jinja.mdx b/docs/old/docs/snippets/prompt-syntax/what-is-jinja.mdx similarity index 100% rename from docs/docs/snippets/prompt-syntax/what-is-jinja.mdx rename to docs/old/docs/snippets/prompt-syntax/what-is-jinja.mdx diff --git a/docs/docs/snippets/snippet-example.mdx b/docs/old/docs/snippets/snippet-example.mdx similarity index 100% rename from docs/docs/snippets/snippet-example.mdx rename to docs/old/docs/snippets/snippet-example.mdx diff --git a/docs/docs/snippets/supported-types.mdx b/docs/old/docs/snippets/supported-types.mdx similarity index 100% rename from docs/docs/snippets/supported-types.mdx rename to docs/old/docs/snippets/supported-types.mdx diff --git a/docs/docs/snippets/syntax/comments.mdx b/docs/old/docs/snippets/syntax/comments.mdx similarity index 100% rename from docs/docs/snippets/syntax/comments.mdx rename to docs/old/docs/snippets/syntax/comments.mdx diff --git a/docs/docs/snippets/syntax/dictionaries.mdx b/docs/old/docs/snippets/syntax/dictionaries.mdx similarity index 100% rename from docs/docs/snippets/syntax/dictionaries.mdx rename to docs/old/docs/snippets/syntax/dictionaries.mdx diff --git a/docs/docs/snippets/syntax/lists.mdx 
b/docs/old/docs/snippets/syntax/lists.mdx similarity index 100% rename from docs/docs/snippets/syntax/lists.mdx rename to docs/old/docs/snippets/syntax/lists.mdx diff --git a/docs/docs/snippets/syntax/strings.mdx b/docs/old/docs/snippets/syntax/strings.mdx similarity index 100% rename from docs/docs/snippets/syntax/strings.mdx rename to docs/old/docs/snippets/syntax/strings.mdx diff --git a/docs/docs/snippets/template-string.mdx b/docs/old/docs/snippets/template-string.mdx similarity index 100% rename from docs/docs/snippets/template-string.mdx rename to docs/old/docs/snippets/template-string.mdx diff --git a/docs/docs/snippets/test-cases.mdx b/docs/old/docs/snippets/test-cases.mdx similarity index 100% rename from docs/docs/snippets/test-cases.mdx rename to docs/old/docs/snippets/test-cases.mdx diff --git a/docs/favicon.png b/docs/old/favicon.png similarity index 100% rename from docs/favicon.png rename to docs/old/favicon.png diff --git a/docs/fern.config.json b/docs/old/fern.config.json similarity index 100% rename from docs/fern.config.json rename to docs/old/fern.config.json diff --git a/docs/generators.yml b/docs/old/generators.yml similarity index 100% rename from docs/generators.yml rename to docs/old/generators.yml diff --git a/docs/openapi/openapi.yaml b/docs/old/openapi/openapi.yaml similarity index 100% rename from docs/openapi/openapi.yaml rename to docs/old/openapi/openapi.yaml diff --git a/docs/package.json b/docs/old/package.json similarity index 100% rename from docs/package.json rename to docs/old/package.json diff --git a/docs/snippets/allowed-role-metadata-basic.mdx b/docs/old/snippets/allowed-role-metadata-basic.mdx similarity index 100% rename from docs/snippets/allowed-role-metadata-basic.mdx rename to docs/old/snippets/allowed-role-metadata-basic.mdx diff --git a/docs/snippets/allowed-role-metadata.mdx b/docs/old/snippets/allowed-role-metadata.mdx similarity index 100% rename from docs/snippets/allowed-role-metadata.mdx rename to docs/old/snippets/allowed-role-metadata.mdx diff --git a/docs/snippets/client-constructor.mdx b/docs/old/snippets/client-constructor.mdx similarity index 100% rename from docs/snippets/client-constructor.mdx rename to docs/old/snippets/client-constructor.mdx diff --git a/docs/snippets/snippet-example.mdx b/docs/old/snippets/snippet-example.mdx similarity index 100% rename from docs/snippets/snippet-example.mdx rename to docs/old/snippets/snippet-example.mdx diff --git a/engine/baml-runtime/src/cli/init.rs b/engine/baml-runtime/src/cli/init.rs index 87a30be03..844601ead 100644 --- a/engine/baml-runtime/src/cli/init.rs +++ b/engine/baml-runtime/src/cli/init.rs @@ -19,7 +19,7 @@ pub struct InitArgs { #[arg( long, - help = r#"The OpenAPI client generator to run, if --client-type=openapi. + help = r#"The OpenAPI client generator to run, if --client-type=rest/openapi. Examples: "go", "java", "php", "ruby", "rust". See full list at https://github.com/OpenAPITools/openapi-generator#overview."# )] openapi_client_type: Option<String>, diff --git a/fern b/fern deleted file mode 120000 index ffcee8577..000000000 --- a/fern +++ /dev/null @@ -1 +0,0 @@ -./docs \ No newline at end of file diff --git a/fern/01-guide/01-editors/cursor.mdx b/fern/01-guide/01-editors/cursor.mdx new file mode 100644 index 000000000..ae02fbe45 --- /dev/null +++ b/fern/01-guide/01-editors/cursor.mdx @@ -0,0 +1,8 @@ +--- +title: Cursor +--- +Refer to the [Cursor Extension Installation Guide](https://www.cursor.com/how-to-install-extension) to install the extension in Cursor. 
+ + +You may need to update the BAML extension manually using the process above. Auto-update does not seem to be working well for many extensions in Cursor. + \ No newline at end of file diff --git a/fern/01-guide/01-editors/others.mdx b/fern/01-guide/01-editors/others.mdx new file mode 100644 index 000000000..96ee63baa --- /dev/null +++ b/fern/01-guide/01-editors/others.mdx @@ -0,0 +1,15 @@ +We don't currently have first-class support for any other editors: + +* JetBrains IDEs +* Helix +* Zed +* Vim +* Emacs +* Sublime Text +* Atom + + +Since the extension is a language server, we can technically pull out the language server and syntax highlighter and support any editor that supports the Language Server Protocol. +If you're interested in contributing to the project and supporting another editor, [please reach out](/contact). + +An alternative is to edit your files in our [Playground](https://www.promptfiddle.com/) and copy the code into your editor, but we recommend using VSCode to edit BAML files for now. diff --git a/fern/01-guide/01-editors/vscode.mdx b/fern/01-guide/01-editors/vscode.mdx new file mode 100644 index 000000000..5aa01d822 --- /dev/null +++ b/fern/01-guide/01-editors/vscode.mdx @@ -0,0 +1,65 @@ +We provide a BAML VSCode extension: https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension + + + +| Feature | Supported | +|---------|-----------| +| Syntax highlighting for BAML files | ✅ | +| Code snippets for BAML | ✅ | +| LLM playground for testing BAML functions | ✅ | +| Jump to definition for BAML files | ✅ | +| Jump to definition between Python/TS files and BAML files | ✅ | +| Auto generate `baml_client` on save | ✅ | +| BAML formatter | ❌ | + +## Opening BAML Playground + +Once you open a `.baml` file in VSCode, you should see a small button over every BAML function: `Open Playground`. + + + +Or type `BAML Playground` in the VSCode Command Bar (`CMD + Shift + P` or `CTRL + Shift + P`) to open the playground. + + + +## Setting Env Variables + +Click on the `Settings` button in the top right of the playground and set the environment variables. + +It should have an indicator showing how many variables are still unset. + + + +The playground should persist the environment variables between closing and opening VSCode. + + + You can set environment variables lazily. If anything is unset, you'll get an error when you run the function. + + + + Environment Variables are stored in VSCode's local storage! We don't save any additional data to disk, or send them across the network. + + + +## Running Tests + +- Click on the `Run All Tests` button in the playground. + +- Press the `▶️` button next to an individual test case to run just that test case. + + +## Switching Functions + +The playground will automatically switch to the function you're currently editing. + +To manually change it, click on the current function name in the playground (next to the dropdown) and search for your desired function. + +## Switching Test Cases + +The test case with the highlighted background is the currently rendered test case. Clicking on a different test case will render that test case. + + + +You can toggle between seeing the results of all test cases or only the test cases for the current function. 
+ + diff --git a/fern/01-guide/02-languages/python.mdx b/fern/01-guide/02-languages/python.mdx new file mode 100644 index 000000000..fbef24c4c --- /dev/null +++ b/fern/01-guide/02-languages/python.mdx @@ -0,0 +1,184 @@ +You can check out this repo: +https://github.com/BoundaryML/baml-examples/tree/main/python-fastapi-starter + +To set up BAML with Python, do the following: + + + ### Install BAML VSCode/Cursor Extension + https://marketplace.visualstudio.com/items?itemName=boundary.baml-extension + + - syntax highlighting + - testing playground + - prompt previews + + + In your VSCode User Settings, we highly recommend adding this to get better autocomplete for Python in general, not just BAML: + + ```json + { + "python.analysis.typeCheckingMode": "basic" + } + ``` + + + ### Install BAML + + ```bash pip + pip install baml-py + ``` + + ```bash poetry + poetry add baml-py + ``` + + ```bash uv + uv add baml-py + ``` + + + ### Add BAML to your existing project + This will give you some starter BAML code in a `baml_src` directory. + + + ```bash pip + baml-cli init + ``` + + ```bash poetry + poetry run baml-cli init + ``` + + ```bash uv + uv run baml-cli init + ``` + + + ### Generate the `baml_client` Python module from `.baml` files + + One of the files in your `baml_src` directory will have a [generator block](/ref/baml/generator). The next command will generate the `baml_client` directory, which contains auto-generated Python code for calling your BAML functions. + + Any types defined in .baml files will be converted into Pydantic models in the `baml_client` directory. + + + + ```bash pip + baml-cli generate + ``` + + + ```bash poetry + poetry run baml-cli generate + ``` + + ```bash uv + uv run baml-cli generate + ``` + + + See [What is baml_client](/guide/introduction/baml_client) to learn more about how this works. + + + + If you set up the [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension), it will automatically run `baml-cli generate` on saving a BAML file. + + + + ### Use a BAML function in Python! + If `baml_client` doesn't exist, make sure to run the previous step! + + + ```python main.py + from baml_client.sync_client import b + from baml_client.types import Resume + + def example(raw_resume: str) -> Resume: + # BAML's internal parser guarantees that ExtractResume + # will always return a Resume type + response = b.ExtractResume(raw_resume) + return response + + def example_stream(raw_resume: str) -> Resume: + stream = b.stream.ExtractResume(raw_resume) + for msg in stream: + print(msg) # This will be a PartialResume type + + # This will be a Resume type + final = stream.get_final_response() + + return final + ``` + + ```python async_main.py + from baml_client.async_client import b + from baml_client.types import Resume + + async def example(raw_resume: str) -> Resume: + # BAML's internal parser guarantees that ExtractResume + # will always return a Resume type + response = await b.ExtractResume(raw_resume) + return response + + async def example_stream(raw_resume: str) -> Resume: + stream = b.stream.ExtractResume(raw_resume) + async for msg in stream: + print(msg) # This will be a PartialResume type + + # This will be a Resume type + final = stream.get_final_response() + + return final + ``` + + + + + +## BAML with Jupyter Notebooks + +You can use the baml_client in a Jupyter notebook. 
+ +One of the common problems is making sure your code changes are picked up by the notebook without having to restart the whole kernel (and re-run all the cells). + +**To make sure your changes in .baml files are reflected in your notebook, you must do these steps:** + + +### Set up the autoreload extension + +```python cell0 +%load_ext autoreload +%autoreload 2 +``` +This will make sure to reload imports, such as baml_client's `b` object, before every cell runs. + +### Import the baml_client module in your notebook + +Note that this is different from how we usually import it in Python. +```python cell1 +# Assuming your baml_client is inside a dir called app/ +import app.baml_client as client # you can name this "llm" or "baml" or whatever you want +``` + +Usually we import things as +`from baml_client import b`, and we can call our functions using `b`, but the `%autoreload` notebook extension does not work well with `from...import` statements. + + +### Call BAML functions using the module name as a prefix + +```python cell2 +raw_resume = "Here's some resume text" +client.b.ExtractResume(raw_resume) +``` +Now your changes in .baml files are reflected in your notebook automatically, without needing to restart the Jupyter kernel. + + +If you want to keep using the `from baml_client import b` style, you'll just need to re-import it every time you regenerate the baml_client. + + + +Pylance will complain about any schema changes you make in .baml files. You can ignore these errors. If you want it to pick up your new types, you'll need to restart the kernel. +This auto-reload approach works best if you're only making changes to the prompts. + + + + +You're all set! Continue on to the [Deployment Guides](/docs/get-started/deploying) for your language to learn how to deploy your BAML code, or check out the [Interactive Examples](https://baml-examples.vercel.app/) to see more examples. \ No newline at end of file diff --git a/fern/01-guide/02-languages/rest.mdx b/fern/01-guide/02-languages/rest.mdx new file mode 100644 index 000000000..1fedf7979 --- /dev/null +++ b/fern/01-guide/02-languages/rest.mdx @@ -0,0 +1,504 @@ + + Requires BAML version >=0.55 + + + + This is a preview feature and may change. Please provide feedback either + in [Discord][discord] or on [GitHub][openapi-feedback-github-issue] so that + we can stabilize the feature and keep you updated! + + +BAML allows you to expose your BAML functions as RESTful APIs: + + + +We integrate with [OpenAPI][openapi] (universal API definitions), so you can get typesafe client libraries for free! 
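As a preview of where the steps below end up: once the BAML dev server is running, each BAML function is just an HTTP endpoint. Here's a minimal sketch in Python — the function name `ExtractResume`, its `resume` input field, and the `/call/<FunctionName>` route on port 2024 are illustrative assumptions, not guaranteed paths:

```python
# A sketch, not a generated client: assumes the dev server (started in the
# steps below) serves a function ExtractResume(resume: string) at
# /call/<FunctionName> on localhost:2024.
import requests

resp = requests.post(
    "http://localhost:2024/call/ExtractResume",
    json={"resume": "Ada Lovelace was an English mathematician and writer"},
)
resp.raise_for_status()
print(resp.json())  # the parsed, schema-conforming result
```

In practice you'd use the generated, typesafe client from the steps below instead of raw HTTP calls.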
+ + + ### Install BAML VSCode Extension + https://marketplace.visualstudio.com/items?itemName=boundary.baml-extension + + - syntax highlighting + - testing playground + - prompt previews + + ### Install NPX + OpenAPI + + + + ```bash + brew install npm openapi-generator + # 'npm' will install npx + # 'openapi-generator' will install both Java and openapi-generator-cli + ``` + + + + OpenAPI requires `default-jdk` + + ```bash + apt install npm default-jdk -y + # 'npm' will install npx; 'default-jdk' will install java + ``` + + + + OpenAPI requires Java + + ```bash + dnf install npm java-21-openjdk -y + # dnf is the successor to yum + ``` + + Amazon Linux 2023: + ```bash + dnf install npm java-21-amazon-corretto -y + # 'npm' will install npx + # 'java-21-amazon-corretto' will install java + ``` + + Amazon Linux 2: + ```bash + curl -sL https://rpm.nodesource.com/setup_16.x | bash - + yum install nodejs -y + # 'nodejs' will install npx + amazon-linux-extras install java-openjdk11 -y + # 'java-openjdk11' will install java + ``` + + + + To install `npx` and `java` (for OpenAPI): + + 1. Use the [Node.js installer](https://nodejs.org/en/download/prebuilt-installer) to install `npx` (default installer settings are fine). + 2. Run `npm install -g npm@latest` to update `npx` (there is currently an [issue][npx-windows-issue] with the default install of `npx` on Windows where it doesn't work out of the box). + 3. Run the [Adoptium OpenJDK `.msi` installer](https://adoptium.net/temurin/releases/?os=windows) (install the JDK; default installer settings are fine). + + You can verify that `npx` and `java` are installed by running: + + ```powershell + npx -version + java -version + ``` + + + + To install `npx`, use the [Node.js installer](https://nodejs.org/en/download/prebuilt-installer). + + To install `java` (for OpenAPI), use the [Adoptium OpenJDK packages](https://adoptium.net/installation/linux/). + + + + ### Add BAML to your existing project + This will give you some starter BAML code in a `baml_src` directory. + + + + ```bash + npx @boundaryml/baml init \ + --client-type rest/openapi --openapi-client-type csharp + ``` + + + + + OpenAPI supports [5 different C++ client types][openapi-client-types]; + any of them will work with BAML. + + ```bash + npx @boundaryml/baml init \ + --client-type rest/openapi --openapi-client-type cpp-restsdk + ``` + + + + ```bash + npx @boundaryml/baml init \ + --client-type rest/openapi --openapi-client-type go + ``` + + + + + ```bash + npx @boundaryml/baml init \ + --client-type rest/openapi --openapi-client-type java + ``` + + Notice that `on_generate` has been initialized for you to: + + - run the OpenAPI generator to generate a Java client library, and _also_ + - run `mvn clean install` to install the generated client library to your + local Maven repository + + + If you only use Maven through an IDE (e.g. IntelliJ IDEA), you should + remove `&& mvn clean install` from the generated `on_generate` command. + + + + + + ```bash + npx @boundaryml/baml init \ + --client-type rest/openapi --openapi-client-type php + ``` + + + + ```bash + npx @boundaryml/baml init \ + --client-type rest/openapi --openapi-client-type ruby + ``` + + + + ```bash + npx @boundaryml/baml init \ + --client-type rest/openapi --openapi-client-type rust + ``` + + + + + As long as there's an OpenAPI client generator that works with your stack, + you can use it with BAML. Check out the [full list in the OpenAPI docs][openapi-client-types]. 
+
+  ```bash
+  npx @boundaryml/baml init \
+    --client-type rest/openapi --openapi-client-type $OPENAPI_CLIENT_TYPE
+  ```
+
+
+
+
+  ### Start the BAML development server
+
+  ```bash
+  npx @boundaryml/baml dev --preview
+  ```
+
+  This will do four things:
+
+  - serve your BAML functions over a RESTful interface on `localhost:2024`
+  - generate an OpenAPI schema in `baml_client/openapi.yaml`
+  - run `openapi-generator -g $OPENAPI_CLIENT_TYPE` in the `baml_client` directory to
+    generate an OpenAPI client for you to use
+  - re-run the above steps whenever you modify any `.baml` files
+
+
+  BAML-over-REST is currently a preview feature. Please provide feedback
+  either in [Discord][discord] or on [GitHub][openapi-feedback-github-issue]
+  so that we can stabilize the feature and keep you updated!
+
+
+  ### Use a BAML function in any language!
+
+  `openapi-generator` will generate a `README` with instructions for installing
+  and using your client; we've included snippets for some of the most popular
+  languages below. Check out
+  [`baml-examples`](https://github.com/BoundaryML/baml-examples) for example
+  projects with instructions for running them.
+
+
+  We've tested many of the OpenAPI clients listed below, but not all of them. If you run
+  into issues with any of the OpenAPI clients, please let us know, either in
+  [Discord][discord] or by commenting on
+  [GitHub][openapi-feedback-github-issue], so that we can either help you out
+  or fix it!
+
+
+
+
+
+Run this with `go run main.go`:
+
+```go main.go
+package main
+
+import (
+	"context"
+	"fmt"
+	"log"
+	baml "my-golang-app/baml_client"
+)
+
+func main() {
+	cfg := baml.NewConfiguration()
+	b := baml.NewAPIClient(cfg).DefaultAPI
+	extractResumeRequest := baml.ExtractResumeRequest{
+		Resume: "Ada Lovelace (@gmail.com) was an English mathematician and writer",
+	}
+	resp, r, err := b.ExtractResume(context.Background()).ExtractResumeRequest(extractResumeRequest).Execute()
+	if err != nil {
+		fmt.Printf("Error when calling b.ExtractResume: %v\n", err)
+		fmt.Printf("Full HTTP response: %v\n", r)
+		return
+	}
+	log.Printf("Response from server: %v\n", resp)
+}
+```
+
+
+
+First, add the OpenAPI-generated client to your project.
+
+
+
+
+
+You can use the default `on_generate` command, which will tell `baml dev` to
+install the OpenAPI-generated client into your local Maven repository by running
+`mvn clean install` every time you save a change to a BAML file.
+
+To depend on the client in your local Maven repo, you can use these configs:
+
+
+```xml pom.xml
+<dependency>
+  <groupId>org.openapitools</groupId>
+  <artifactId>openapi-java-client</artifactId>
+  <version>0.1.0</version>
+  <scope>compile</scope>
+</dependency>
+```
+
+```kotlin settings.gradle.kts
+repositories {
+  mavenCentral()
+  mavenLocal()
+}
+
+dependencies {
+  implementation("org.openapitools:openapi-java-client:0.1.0")
+}
+```
+
+
+
+
+
+You'll probably want to comment out `on_generate` and instead use either the [OpenAPI Maven plugin] or [OpenAPI Gradle plugin] to build your OpenAPI client.
+
+[OpenAPI Maven plugin]: https://github.com/OpenAPITools/openapi-generator/tree/master/modules/openapi-generator-maven-plugin
+[OpenAPI Gradle plugin]: https://github.com/OpenAPITools/openapi-generator/tree/master/modules/openapi-generator-gradle-plugin
+
+
+```xml pom.xml
+<build>
+  <plugins>
+    <plugin>
+      <groupId>org.openapitools</groupId>
+      <artifactId>openapi-generator-maven-plugin</artifactId>
+      <version>7.8.0</version>
+      <executions>
+        <execution>
+          <goals>
+            <goal>generate</goal>
+          </goals>
+          <configuration>
+            <inputSpec>${project.basedir}/baml_client/openapi.yaml</inputSpec>
+            <generatorName>java</generatorName>
+            <output>${project.build.directory}/generated-sources/openapi</output>
+            <apiPackage>com.boundaryml.baml_client.api</apiPackage>
+            <modelPackage>com.boundaryml.baml_client.model</modelPackage>
+            <invokerPackage>com.boundaryml.baml_client</invokerPackage>
+            <configOptions>
+              <java8>true</java8>
+            </configOptions>
+          </configuration>
+        </execution>
+      </executions>
+    </plugin>
+  </plugins>
+</build>
+```
+
+```kotlin settings.gradle.kts
+plugins {
+  id("org.openapi.generator") version "7.8.0"
+}
+
+openApiGenerate {
+  generatorName.set("java") // Change to 'kotlin', 'spring', etc. if needed
+  inputSpec.set("${projectDir}/baml_client/openapi.yaml")
+  outputDir.set("$buildDir/generated-sources/openapi")
+  apiPackage.set("com.boundaryml.baml_client.api")
+  modelPackage.set("com.boundaryml.baml_client.model")
+  invokerPackage.set("com.boundaryml.baml_client")
+  additionalProperties.set(mapOf("java8" to "true"))
+}
+
+sourceSets["main"].java {
+  srcDir("$buildDir/generated-sources/openapi/src/main/java")
+}
+
+tasks.named("compileJava") {
+  dependsOn("openApiGenerate")
+}
+```
+
+
+
+
+
+Then, copy this code into wherever your `main` function is:
+
+```java
+import com.boundaryml.baml_client.ApiClient;
+import com.boundaryml.baml_client.ApiException;
+import com.boundaryml.baml_client.Configuration;
+// NOTE: baml_client/README.md will suggest importing from models.* - that is wrong.
+// See https://github.com/OpenAPITools/openapi-generator/issues/19431 for more details.
+import com.boundaryml.baml_client.model.*;
+import com.boundaryml.baml_client.api.DefaultApi;
+
+public class Example {
+  public static void main(String[] args) {
+    ApiClient defaultClient = Configuration.getDefaultApiClient();
+    DefaultApi apiInstance = new DefaultApi(defaultClient);
+    ExtractResumeRequest extractResumeRequest = new ExtractResumeRequest();
+    try {
+      Resume result = apiInstance.extractResume(extractResumeRequest);
+      System.out.println(result);
+    } catch (ApiException e) {
+      System.err.println("Exception when calling DefaultApi#extractResume");
+      System.err.println("Status code: " + e.getCode());
+      System.err.println("Reason: " + e.getResponseBody());
+      System.err.println("Response headers: " + e.getResponseHeaders());
+      e.printStackTrace();
+    }
+  }
+}
+```
+
+
+
+
+
+  The PHP OpenAPI generator doesn't support OpenAPI's `oneOf` type, which is
+  what we map BAML union types to. Please let us know if this is an issue for
+  you and you need help working around it.
+
+
+First, add the OpenAPI-generated client to your project:
+
+```json composer.json
+  "repositories": [
+    {
+      "type": "path",
+      "url": "baml_client"
+    }
+  ],
+  "require": {
+    "boundaryml/baml-client": "*@dev"
+  }
+```
+
+You can now use this code to call a BAML function:
+
+```php
+<?php
+require_once(__DIR__ . '/vendor/autoload.php');
+
+// The generated namespace may differ; check baml_client/README.md for the exact class names.
+$apiInstance = new BamlClient\Api\DefaultApi();
+$extract_resume_request = new BamlClient\Model\ExtractResumeRequest();
+$extract_resume_request->setResume("Marie Curie was a Polish and naturalised-French physicist and chemist who conducted pioneering research on radioactivity");
+
+try {
+    $result = $apiInstance->extractResume($extract_resume_request);
+    print_r($result);
+} catch (Exception $e) {
+    echo 'Exception when calling DefaultApi->extractResume: ', $e->getMessage(), PHP_EOL;
+}
+```
+
+
+
+
+Use `ruby -Ilib/baml_client app.rb` to run this:
+
+```ruby app.rb
+require 'baml_client'
+require 'pp'
+
+api_client = BamlClient::ApiClient.new
+b = BamlClient::DefaultApi.new(api_client)
+
+extract_resume_request = BamlClient::ExtractResumeRequest.new(
+  resume: <<~RESUME
+    John Doe
+
+    Education
+    - University of California, Berkeley
+    - B.S. in Computer Science
+    - graduated 2020
+
+    Skills
+    - Python
+    - Java
+    - C++
+  RESUME
+)
+
+begin
+  result = b.extract_resume(extract_resume_request)
+  pp result
+
+  edu0 = result.education[0]
+  puts "Education: #{edu0.school}, #{edu0.degree}, #{edu0.year}"
+rescue BamlClient::ApiError => e
+  puts "Error when calling DefaultApi#extract_resume"
+  pp e
+end
+```
+
+
+
+
+
+  If you're using `cargo watch -- cargo build` and seeing build failures because it can't find
+  the generated `baml_client`, try increasing the delay on `cargo watch` to 1 second like so:
+
+  ```bash
+  cargo watch --delay 1 -- cargo build
+  ```
+
+
+First, add the OpenAPI-generated client to your project:
+
+```toml Cargo.toml
+[dependencies]
+baml-client = { path = "./baml_client" }
+```
+
+You can now use `cargo run`:
+
+```rust
+use baml_client::models::ExtractResumeRequest;
+use baml_client::apis::default_api as b;
+
+#[tokio::main]
+async fn main() {
+    let config = baml_client::apis::configuration::Configuration::default();
+
+    let resp = b::extract_resume(&config, ExtractResumeRequest {
+        resume: "Tony Hoare is a British computer scientist who has made foundational contributions to programming languages, algorithms, operating systems, formal verification, and concurrent computing.".to_string(),
+    }).await.unwrap();
+
+    println!("{:#?}", resp);
+}
+```
+
+
+
+
+
+
+[discord]: https://discord.gg/BTNBeXGuaS
+[openapi-feedback-github-issue]: https://github.com/BoundaryML/baml/issues/892
+[npx-windows-issue]: https://github.com/nodejs/node/issues/53538
+[openapi-client-types]: https://github.com/OpenAPITools/openapi-generator#overview
diff --git a/fern/01-guide/02-languages/ruby.mdx b/fern/01-guide/02-languages/ruby.mdx
new file mode 100644
index 000000000..31867a394
--- /dev/null
+++ b/fern/01-guide/02-languages/ruby.mdx
@@ -0,0 +1,71 @@
+You can check out this repo: https://github.com/BoundaryML/baml-examples/tree/main/ruby-example
+
+To set up BAML with Ruby, do the following:
+
+
+  ### Install BAML VSCode Extension
+    https://marketplace.visualstudio.com/items?itemName=boundary.baml-extension
+
+    - syntax highlighting
+    - testing playground
+    - prompt previews
+
+  ### Install BAML
+    ```bash bundle
+    bundle add baml sorbet-runtime
+    ```
+
+  ### Add BAML to your existing project
+  This will give you some starter BAML code in a `baml_src` directory.
+
+    ```bash
+    bundle exec baml-cli init
+    ```
+
+  ### Generate Ruby code from `.baml` files
+
+    ```bash
+    bundle exec baml-cli generate
+    ```
+
+  See [What is baml_src](/guide/introduction/baml_src) to learn more about how this works.
+
+
+  As fun as writing BAML is, we want you to be able to leverage BAML from your existing Ruby modules. This command gives you a Ruby module that is a type-safe interface to every BAML function.
+
+
+  Our [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension) automatically runs this command when you save a BAML file.
+
+
+  ### Use a BAML function in Ruby!
+  If `baml_client` doesn't exist, make sure to run the previous step!
+
+
+    ```ruby main.rb
+    require_relative "baml_client/client"
+
+    def example(raw_resume)
+      # r is an instance of Baml::Types::Resume, defined in baml_client/types
+      r = Baml.Client.ExtractResume(resume: raw_resume)
+
+      puts "ExtractResume response:"
+      puts r.inspect
+    end
+
+    def example_stream(raw_resume)
+      stream = Baml.Client.stream.ExtractResume(resume: raw_resume)
+
+      stream.each do |msg|
+        # msg is an instance of Baml::PartialTypes::Resume
+        # defined in baml_client/partial_types
+        puts msg.inspect
+      end
+
+      stream.get_final_response
+    end
+
+    example 'Grace Hopper created COBOL'
+    example_stream 'Grace Hopper created COBOL'
+    ```
+
+
diff --git a/fern/01-guide/02-languages/typescript.mdx b/fern/01-guide/02-languages/typescript.mdx
new file mode 100644
index 000000000..2890c741d
--- /dev/null
+++ b/fern/01-guide/02-languages/typescript.mdx
@@ -0,0 +1,123 @@
+You can check out this repo: https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter
+
+To set up BAML with TypeScript, do the following:
+
+
+  ### Install BAML VSCode/Cursor Extension
+    https://marketplace.visualstudio.com/items?itemName=boundary.baml-extension
+
+    - syntax highlighting
+    - testing playground
+    - prompt previews
+
+  ### Install BAML
+
+    ```bash npm
+    npm install @boundaryml/baml
+    ```
+
+    ```bash pnpm
+    pnpm add @boundaryml/baml
+    ```
+
+    ```bash yarn
+    yarn add @boundaryml/baml
+    ```
+
+    ```bash deno
+    deno install npm:@boundaryml/baml
+    ```
+
+
+  ### Add BAML to your existing project
+  This will give you some starter BAML code in a `baml_src` directory.
+
+
+    ```bash npm
+    npx baml-cli init
+    ```
+
+    ```bash pnpm
+    pnpx baml-cli init
+    ```
+
+    ```bash yarn
+    yarn baml-cli init
+    ```
+
+    ```bash deno
+    dpx baml-cli init
+    ```
+
+
+  ### Generate the `baml_client` typescript package from `.baml` files
+
+  One of the files in your `baml_src` directory will have a [generator block](/ref/baml/generator). This tells BAML how to generate the `baml_client` directory, which will have auto-generated typescript code to call your BAML functions.
+
+  ```bash
+  npx baml-cli generate
+  ```
+
+  You can modify your `package.json` so you have a helper prefix in front of your build command.
+
+  ```json package.json
+  {
+    "scripts": {
+      // Add a new command
+      "baml-generate": "baml-cli generate",
+      // Always call baml-generate on every build.
+      "build": "npm run baml-generate && tsc --build"
+    }
+  }
+  ```
+
+  See [What is baml_src](/guide/introduction/baml_src) to learn more about how this works.
+
+
+
+
+  If you set up the [VSCode extension](https://marketplace.visualstudio.com/items?itemName=Boundary.baml-extension), it will automatically run `baml-cli generate` on saving a BAML file.
+
+
+  ### Use a BAML function in TypeScript!
+  If `baml_client` doesn't exist, make sure to run the previous step!
+
+
+    ```typescript index.ts
+    import {b} from "baml_client"
+    import type {Resume} from "baml_client/types"
+
+    async function Example(raw_resume: string): Promise<Resume> {
+      // BAML's internal parser guarantees that ExtractResume
+      // will always return a Resume type
+      const response = await b.ExtractResume(raw_resume);
+      return response;
+    }
+
+    async function ExampleStream(raw_resume: string): Promise<Resume> {
+      const stream = b.stream.ExtractResume(raw_resume);
+      for await (const msg of stream) {
+        console.log(msg) // This will be a partial Resume type
+      }
+
+      // This is guaranteed to be a Resume type.
+      return await stream.get_final_response();
+    }
+    ```
+
+    ```typescript sync_example.ts
+    import {b} from "baml_client/sync_client"
+    import type {Resume} from "baml_client/types"
+
+    function Example(raw_resume: string): Resume {
+      // BAML's internal parser guarantees that ExtractResume
+      // will always return a Resume type
+      const response = b.ExtractResume(raw_resume);
+      return response;
+    }
+
+    // Streaming is not available in the sync_client.
+
+    ```
+
+
\ No newline at end of file
diff --git a/fern/01-guide/03-development/deploying/aws.mdx b/fern/01-guide/03-development/deploying/aws.mdx
new file mode 100644
index 000000000..a3ab759e7
--- /dev/null
+++ b/fern/01-guide/03-development/deploying/aws.mdx
@@ -0,0 +1,14 @@
+---
+slug: docs/get-started/deploying/aws
+---
+
+You can use [SST](https://sst.dev/) to define the Lambda configuration and deploy it.
+
+The example below builds the BAML x86_64 rust binaries into a Lambda layer and uses the layer in the Lambda function.
+
+[Example Node + SST Project](https://github.com/BoundaryML/baml-examples/tree/main/node-aws-lambda-sst)
+
+Let us know if you want to deploy a Python BAML project on AWS. Our example project is coming soon.
+
+### Current limitations
+The BAML binaries only support the NodeJS 20.x runtime (or a runtime using Amazon Linux 2023). Let us know if you need a different runtime version.
\ No newline at end of file
diff --git a/fern/01-guide/03-development/deploying/docker.mdx b/fern/01-guide/03-development/deploying/docker.mdx
new file mode 100644
index 000000000..9c7299dc3
--- /dev/null
+++ b/fern/01-guide/03-development/deploying/docker.mdx
@@ -0,0 +1,30 @@
+---
+slug: docs/get-started/deploying/docker
+---
+
+
+When you develop with BAML, the BAML VSCode extension generates a `baml_client` directory (on every save) with all the generated code you need to use your AI functions in your application.
+
+We recommend you add `baml_client` to your `.gitignore` file to avoid committing generated code to your repository, and re-generate the client code when you build and deploy your application.
+
+If you're just starting out, you _could_ commit the generated code to avoid dealing with this; just make sure the VSCode extension version matches your BAML package dependency version (e.g. `baml-py` for Python and `@boundaryml/baml` for TypeScript) so there are no compatibility issues.
+
+To build your client, you can use the following command.
See also [baml-cli generate](/ref/baml-cli/generate):
+
+
+
+```dockerfile python Dockerfile
+RUN baml-cli generate --from path-to-baml_src
+```
+
+```dockerfile TypeScript Dockerfile
+# Do this early on in the dockerfile script before transpiling to JS
+RUN npx baml-cli generate --from path-to-baml_src
+```
+
+```dockerfile Ruby Dockerfile
+RUN bundle add baml
+RUN bundle exec baml-cli generate --from path/to/baml_src
+```
+
+
diff --git a/fern/01-guide/03-development/deploying/nextjs.mdx b/fern/01-guide/03-development/deploying/nextjs.mdx
new file mode 100644
index 000000000..c8562d495
--- /dev/null
+++ b/fern/01-guide/03-development/deploying/nextjs.mdx
@@ -0,0 +1,45 @@
+---
+slug: docs/get-started/deploying/nextjs
+---
+
+To deploy a NextJS app with BAML, take a look at the starter template:
+https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter
+
+All you need to do is modify your `next.config.mjs` to allow BAML to run properly:
+```js
+/** @type {import('next').NextConfig} */
+const nextConfig = {
+  experimental: {
+    serverComponentsExternalPackages: ["@boundaryml/baml"],
+  },
+  webpack: (config, { dev, isServer, webpack, nextRuntime }) => {
+    config.module.rules.push({
+      test: /\.node$/,
+      use: [
+        {
+          loader: "nextjs-node-loader",
+          options: {
+            outputPath: config.output.path,
+          },
+        },
+      ],
+    });
+
+    return config;
+  },
+};
+
+export default nextConfig;
+```
+
+and change your `package.json` to build the baml client automatically (and enable logging in dev mode if you want):
+
+```json
+  "scripts": {
+    "dev": "BAML_LOG=info next dev",
+    "build": "pnpm generate && next build",
+    "start": "next start",
+    "lint": "next lint",
+    "generate": "baml-cli generate --from ./baml_src"
+  },
+```
\ No newline at end of file
diff --git a/fern/01-guide/03-development/deploying/openapi.mdx b/fern/01-guide/03-development/deploying/openapi.mdx
new file mode 100644
index 000000000..c6ebc1bfb
--- /dev/null
+++ b/fern/01-guide/03-development/deploying/openapi.mdx
@@ -0,0 +1,340 @@
+---
+slug: docs/get-started/deploying/openapi
+---
+
+
+  This feature was added in v0.55.0.
+
+
+
+  This page assumes you've gone through the [OpenAPI quickstart].
+
+
+[OpenAPI quickstart]: /docs/get-started/quickstart/openapi
+
+To deploy BAML as a RESTful API, you'll need to do three things:
+
+- host your BAML functions in a Docker container
+- update your app to call it
+- run BAML and your app side-by-side using `docker-compose`
+
+Read on to learn how to do this with `docker-compose`.
+
+
+  You can also run `baml-cli` in a subprocess from your app directly, and we
+  may recommend this approach in the future. Please let us know if you'd
+  like to see instructions for doing so, and in what language, by asking in
+  [Discord][discord] or [on the GitHub issue][openapi-feedback-github-issue].
+
+
+## Host your BAML functions in a Docker container
+
+In the directory containing your `baml_src/` directory, create a
+`baml.Dockerfile` to host your BAML functions in a Docker container:
+
+
+  BAML-over-HTTP is currently a preview feature. Please provide feedback either
+  in [Discord][discord] or on [GitHub][openapi-feedback-github-issue] so that
+  we can stabilize the feature and keep you updated!
+
+
+```docker title="baml.Dockerfile"
+FROM node:20
+
+WORKDIR /app
+COPY baml_src/ .
+ +# If you want to pin to a specific version (which we recommend): +# RUN npm install -g @boundaryml/baml@VERSION +RUN npm install -g @boundaryml/baml + +CMD baml-cli serve --preview --port 2024 +``` + + + + + +Assuming you intend to run your own application in a container, we recommend +using `docker-compose` to run your app and BAML-over-HTTP side-by-side: + +```bash +docker compose up --build --force-recreate +``` + +```yaml title="docker-compose.yaml" +services: + baml-over-http: + build: + # This will build baml.Dockerfile when you run docker-compose up + context: . + dockerfile: baml.Dockerfile + healthcheck: + test: [ "CMD", "curl", "-f", "http://localhost:2024/_debug/ping" ] + interval: 1s + timeout: 100ms + retries: 3 + # This allows you to 'curl localhost:2024/_debug/ping' from your machine, + # i.e. the Docker host + ports: + - "2024:2024" + + debug-container: + image: amazonlinux:latest + depends_on: + # Wait until the baml-over-http healthcheck passes to start this container + baml-over-http: + condition: service_healthy + command: "curl -v http://baml-over-http:2024/_debug/ping" +``` + + + To call the BAML server from your laptop (i.e. the host machine), you must use + `localhost:2024`. You may only reach it as `baml-over-http:2024` from within + another Docker container. + + + + + + +If you don't care about using `docker-compose`, you can just run: + +```bash +docker build -t baml-over-http -f baml.Dockerfile . +docker run -p 2024:2024 baml-over-http +``` + + + + +To verify for yourself that BAML-over-HTTP is up and running, you can run: + +```bash +curl http://localhost:2024/_debug/ping +``` + +## Update your app to call it + +Update your code to use `BAML_ENDPOINT`, if set, as the BAML address. This +will allow you to point it at `baml-cli` running in another Docker container. + + + + + +```go +import ( + "os" + baml "my-golang-app/baml_client" +) + +func main() { + cfg := baml.NewConfiguration() + bamlEndpoint := os.Getenv("BAML_ENDPOINT") + if bamlEndpoint != "" { + cfg.BasePath = bamlEndpoint + } + b := baml.NewAPIClient(cfg).DefaultAPI + // Use `b` to make API calls +} +``` + + + + +```java +import com.boundaryml.baml_client.ApiClient; +import com.boundaryml.baml_client.ApiException; +import com.boundaryml.baml_client.Configuration; +import com.boundaryml.baml_client.api.DefaultApi; +import com.boundaryml.baml_client.auth.*; + +public class ApiExample { + public static void main(String[] args) { + ApiClient defaultClient = Configuration.getDefaultApiClient(); + String bamlEndpoint = System.getenv("BAML_ENDPOINT"); + if (bamlEndpoint != null && !bamlEndpoint.isEmpty()) { + defaultClient.setBasePath(bamlEndpoint); + } + + DefaultApi apiInstance = new DefaultApi(defaultClient); + // Use `apiInstance` to make API calls + } +} +``` + + + + + +```php +require_once(__DIR__ . 
'/vendor/autoload.php'); + +$config = BamlClient\Configuration::getDefaultConfiguration(); + +$bamlEndpoint = getenv('BAML_ENDPOINT'); +if ($bamlEndpoint !== false && !empty($bamlEndpoint)) { + $config->setHost($bamlEndpoint); +} + +$apiInstance = new OpenAPI\Client\Api\DefaultApi( + new GuzzleHttp\Client(), + $config +); + +// Use `$apiInstance` to make API calls +``` + + + + +```ruby +require 'baml_client' + +api_client = BamlClient::ApiClient.new +if ENV['BAML_ENDPOINT'] + # Initialize the ApiClient with the custom BAML_ENDPOINT + api_client.config.host = ENV['BAML_ENDPOINT'] +end +b = BamlClient::DefaultApi.new(api_client) +``` + + + +```rust +let mut config = baml_client::apis::configuration::Configuration::default(); +if let Some(base_path) = std::env::var("BAML_ENDPOINT").ok() { + config.base_path = base_path; +} +``` + + + + +## Run your app with docker-compose + +Replace `debug-container` with the Dockerfile for your app in the +`docker-compose.yaml` file: + +```yaml +services: + baml-over-http: + build: + context: . + dockerfile: baml.Dockerfile + networks: + - my-app-network + healthcheck: + test: [ "CMD", "curl", "-f", "http://localhost:2024/_debug/ping" ] + interval: 1s + timeout: 100ms + retries: 3 + ports: + - "2024:2024" + + my-app: + build: + context: . + dockerfile: my-app.Dockerfile + depends_on: + baml-over-http: + condition: service_healthy + environment: + - BAML_ENDPOINT=http://baml-over-http:2024 + + debug-container: + image: amazonlinux:latest + depends_on: + baml-over-http: + condition: service_healthy + command: sh -c 'curl -v "$${BAML_ENDPOINT}/_debug/ping"' + environment: + - BAML_ENDPOINT=http://baml-over-http:2024 +``` + +Additionally, you'll want to make sure that you generate the BAML client at +image build time, because `baml_client/` should not be checked into your repo. + +This means that in the CI workflow you use to push your Docker images, you'll +want to do something like this: + +```yaml .github/workflows/build-image.yaml +jobs: + build: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v2 + - name: Build the BAML client + run: | + set -eux + npx @boundaryml/baml generate + docker build -t my-app . +``` + +## (Optional) Secure your BAML functions + +To secure your BAML server, you can also set a password on it using the +`BAML_PASSWORD` environment variable: + + + + + +```bash +BAML_PASSWORD=sk-baml-your-secret-password \ + baml-cli serve --preview --port 2024 +``` + + + + +```docker +FROM node:20 + +WORKDIR /app +RUN npm install -g @boundaryml/baml +COPY baml_src/ . + +ENV BAML_PASSWORD=sk-baml-your-secret-password +CMD baml-cli serve --preview --port 2024 +``` + + + + +This will require incoming requests to attach your specified password as +authorization metadata. You can verify this by confirming that this returns `403 +Forbidden`: + +```bash +curl -v "http://localhost:2024/_debug/status" +``` + +If you attach your password to the request, you'll see that it now returns `200 OK`: + + + + +```bash +export BAML_PASSWORD=sk-baml-your-secret-password +curl "http://baml:${BAML_PASSWORD}@localhost:2024/_debug/status" +``` + + + +```bash +export BAML_PASSWORD=sk-baml-your-secret-password +curl "http://localhost:2024/_debug/status" -H "X-BAML-API-KEY: ${BAML_PASSWORD}" +``` + + + + + + `BAML_PASSWORD` will secure all endpoints _except_ `/_debug/ping`, so that you + can always debug the reachability of your BAML server. 
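+
+If you run the server via `docker-compose`, you can inject the same password as an environment variable instead of hardcoding it in the Dockerfile — a sketch reusing the service from the earlier examples:
+
+```yaml
+services:
+  baml-over-http:
+    build:
+      context: .
+      dockerfile: baml.Dockerfile
+    environment:
+      # Hypothetical placeholder value; source this from a secret store in practice.
+      - BAML_PASSWORD=sk-baml-your-secret-password
+    ports:
+      - "2024:2024"
+```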
+ + +[discord]: https://discord.gg/BTNBeXGuaS +[openapi-feedback-github-issue]: https://github.com/BoundaryML/baml/issues/892 \ No newline at end of file diff --git a/fern/01-guide/03-development/environment-variables.mdx b/fern/01-guide/03-development/environment-variables.mdx new file mode 100644 index 000000000..a115c9e38 --- /dev/null +++ b/fern/01-guide/03-development/environment-variables.mdx @@ -0,0 +1,26 @@ +--- +title: Set Environment Variables +slug: docs/guide/development/environment-variables +--- + + + +## Environment Variables in BAML + +Sometimes you'll see environment variables used in BAML, like in clients: + +```baml + +client GPT4o { + provider baml-openai-chat + options { + model gpt-4o + api_key env.OPENAI_API_KEY + } +} +``` + + + +## Dynamically setting LLM API Keys +You can set the API key for an LLM dynamically by passing in the key as a header or as a parameter (depending on the provider), using the [ClientRegistry](/guide/baml-advanced/llm-client-registry). diff --git a/fern/01-guide/03-development/terminal-logs.mdx b/fern/01-guide/03-development/terminal-logs.mdx new file mode 100644 index 000000000..5aec48e44 --- /dev/null +++ b/fern/01-guide/03-development/terminal-logs.mdx @@ -0,0 +1,35 @@ +--- +slug: /guide/development/terminal-logs +--- +You can add logging to determine what the BAML runtime is doing when it calls LLM endpoints and parses responses. + +To enable logging, set the `BAML_LOG` environment variable: +```sh +# default is warn +BAML_LOG=info +``` + +| Level | Description | +|-------|-------------| +| `error` | Fatal errors by BAML | +| `warn` | Logs any time a function fails (includes LLM calling failures, parsing failures) | +| `info` | Logs every call to a function (including prompt, raw response, and parsed response) | +| `debug` | Requests and detailed parsing errors (warning: may be a lot of logs) | +| `trace` | Everything and more | +| `off` | No logging | + + +Example log: + + +--- + +Since `>0.54.0`: + +To truncate each log entry to a certain length, set the `BOUNDARY_MAX_LOG_CHUNK_CHARS` environment variable: + +```sh +BOUNDARY_MAX_LOG_CHUNK_CHARS=3000 +``` + +This will truncate each part in a log entry to 3000 characters. diff --git a/fern/01-guide/03-development/upgrade-baml-versions.mdx b/fern/01-guide/03-development/upgrade-baml-versions.mdx new file mode 100644 index 000000000..0edfed2d1 --- /dev/null +++ b/fern/01-guide/03-development/upgrade-baml-versions.mdx @@ -0,0 +1,43 @@ +--- +slug: /guide/development/upgrade-baml-versions +title: Upgrading BAML / Fixing Version Mismatches +--- + +Remember that the generated `baml_client` code is generated by your `baml_py` / `@boundaryml/baml` package dependency (using `baml-cli generate`), but can also be generated by the VSCode extension when you save a BAML file. + +**To upgrade BAML versions:** +1. Update the `generator` clause in your `generators.baml` file (or wherever you have it defined) to the new version. If you ran `baml-cli init`, one has already been generated for you! +```baml generators.baml +generator TypescriptGenerator { + output_type "typescript" + .... + // Version of runtime to generate code for (should match the package @boundaryml/baml version) + version "0.62.0" +} +``` + +2. Update your `baml_py` / `@boundaryml/baml` package dependency to the same version. + + + +```sh pip +pip install --upgrade baml-py +``` +```sh npm +npm install @boundaryml/baml@latest +``` + +```sh ruby +gem install baml +``` + + +3. Update VSCode BAML extension to point to the same version. 
Read here for how to keep VSCode in sync with your `baml_py` / `@boundaryml/baml` package dependency: [VSCode BAML Extension reference](/ref/editor-extension-settings/baml-cli-path)
+
+You only need to do this for minor version upgrades (e.g., 0.54.0 -> 0.62.0), not patch versions (e.g., 0.62.0 -> 0.62.1).
+
+
+
+## Troubleshooting
+
+See the [VSCode BAML Extension reference](/guide/reference/vscode-ext/clipath) for more information on how to prevent version mismatches.
\ No newline at end of file
diff --git a/fern/01-guide/04-baml-basics/concurrent-calls.mdx b/fern/01-guide/04-baml-basics/concurrent-calls.mdx
new file mode 100644
index 000000000..806e8c130
--- /dev/null
+++ b/fern/01-guide/04-baml-basics/concurrent-calls.mdx
@@ -0,0 +1,87 @@
+---
+title: Concurrent function calls
+slug: /guide/baml-basics/concurrent-calls
+---
+
+
+We’ll use `function ClassifyMessage(input: string) -> Category` for our example:
+
+
+```baml
+enum Category {
+    Refund
+    CancelOrder
+    TechnicalSupport
+    AccountIssue
+    Question
+}
+
+function ClassifyMessage(input: string) -> Category {
+  client GPT4o
+  prompt #"
+    Classify the following INPUT into ONE
+    of the following categories:
+
+    INPUT: {{ input }}
+
+    {{ ctx.output_format }}
+
+    Response:
+  "#
+}
+```
+
+
+
+
+
+You can make concurrent `b.ClassifyMessage()` calls like so:
+
+```python main.py
+import asyncio
+
+from baml_client import b
+from baml_client.types import Category
+
+async def main():
+    await asyncio.gather(
+        b.ClassifyMessage("I want to cancel my order"),
+        b.ClassifyMessage("I want a refund")
+    )
+
+if __name__ == '__main__':
+    asyncio.run(main())
+```
+
+
+
+
+You can make concurrent `b.ClassifyMessage()` calls like so:
+
+```ts main.ts
+import { b } from './baml_client'
+
+const main = async () => {
+  // Promise.all takes an array of promises and resolves them concurrently
+  const [category1, category2] = await Promise.all([
+    b.ClassifyMessage('I want to cancel my order'),
+    b.ClassifyMessage('I want a refund'),
+  ])
+}
+
+if (require.main === module) {
+  main()
+}
+
+```
+
+
+
+
+BAML Ruby (beta) does not currently support async/concurrent calls.
+
+Please [contact us](/contact) if this is something you need.
+
+
+
\ No newline at end of file
diff --git a/fern/01-guide/04-baml-basics/error-handling.mdx b/fern/01-guide/04-baml-basics/error-handling.mdx
new file mode 100644
index 000000000..e3d37ab99
--- /dev/null
+++ b/fern/01-guide/04-baml-basics/error-handling.mdx
@@ -0,0 +1,144 @@
+
+When BAML raises an exception, it will be an instance of a subclass of `BamlError`. This allows you to catch all BAML-specific exceptions with a single `except` block.
+
+## Example
+
+```python Python
+from baml_client import b
+from baml_py.errors import BamlError, BamlInvalidArgumentError, BamlClientError, BamlClientHttpError, BamlValidationError
+
+try:
+    b.CallFunctionThatRaisesError()
+except BamlError as e:
+    print(e)
+
+
+try:
+    b.CallFunctionThatRaisesError()
+except BamlValidationError as e:
+    # The original prompt sent to the LLM
+    print(e.prompt)
+    # The LLM response string
+    print(e.raw_output)
+    # A human-readable error message
+    print(e.message)
+```
+
+
+```typescript TypeScript
+import { b } from './baml_client'
+// For catching parsing errors, you can import this
+import { BamlValidationError } from '@boundaryml/baml'
+// The rest of the BAML errors contain a string that is prefixed with:
+// "BamlError:"
+// Subclasses are sequentially appended to the string.
+// For example, BamlInvalidArgumentError is returned as: +// "BamlError: BamlInvalidArgumentError:" +// Or, BamlClientHttpError is returned as: +// "BamlError: BamlClientError: BamlClientHttpError:" + + +async function example() { + try { + await b.CallFunctionThatRaisesError() + } catch (e) { + if (e instanceof BamlValidationError) { + // You should be lenient to these fields missing. + // The original prompt sent to the LLM + console.log(e.prompt) + // The LLM response string + console.log(e.raw_output) + // A human-readable error message + console.log(e.message) + } else { + // Handle other BAML errors + console.log(e) + } + } +} + +``` + +```ruby Ruby +# Example coming soon +``` + + + +## BamlError + +Base class for all BAML exceptions. + + + A human-readable error message. + + +### BamlInvalidArgumentError + +Subclass of `BamlError`. + +Raised when one or multiple arguments to a function are invalid. + +### BamlClientError + +Subclass of `BamlError`. + +Raised when a client fails to return a valid response. + + +In the case of aggregate clients like `fallback` or those with `retry_policy`, only the last client's error is raised. + + +#### BamlClientHttpError + +Subclass of `BamlClientError`. + +Raised when the HTTP request made by a client fails with a non-200 status code. + + + The status code of the response. + +Common status codes are: + +- 1: Other +- 2: Other +- 400: Bad Request +- 401: Unauthorized +- 403: Forbidden +- 404: Not Found +- 429: Too Many Requests +- 500: Internal Server Error + + +### BamlValidationError + +Subclass of `BamlError`. + +Raised when BAML fails to parse a string from the LLM into the specified object. + + + The raw text from the LLM that failed to parse into the expected return type of a function. + + + + The parsing-related error message. + + + + The original prompt that was sent to the LLM, formatted as a plain string. Images sent as base64-encoded strings are not serialized into this field. + diff --git a/fern/01-guide/04-baml-basics/multi-modal.mdx b/fern/01-guide/04-baml-basics/multi-modal.mdx new file mode 100644 index 000000000..03e9fd980 --- /dev/null +++ b/fern/01-guide/04-baml-basics/multi-modal.mdx @@ -0,0 +1,127 @@ +--- +slug: /guide/baml-basics/multi-modal +--- + +## Multi-modal input + +You can use `audio` or `image` input types in BAML prompts. Just create an input argument of that type and render it in the prompt. + +Check the "raw curl" checkbox in the playground to see how BAML translates multi-modal input into the LLM Request body. + +```baml +// "image" is a reserved keyword so we name the arg "img" +function DescribeMedia(img: image) -> string { + client openai/gpt-4o + // Most LLM providers require images or audio to be sent as "user" messages. + prompt #" + {{_.role("user")}} + Describe this image: {{ img }} + "# +} + +// See the "testing functions" Guide for more on testing Multimodal functions +test Test { + args { + img { + url "https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png" + } + } +} +``` +See how to [test images in the playground](/guide/baml-basics/testing-functions#images). + +## Calling Multimodal BAML Functions + +#### Images +Calling a BAML function with an `image` input argument type (see [image types](/ref/baml/types#image)) + +The `from_url` and `from_base64` methods create an `Image` object based on input type. 
+
+```python Python
+from baml_py import Image
+from baml_client import b
+
+async def test_image_input():
+    # from URL
+    res = await b.TestImageInput(
+        img=Image.from_url(
+            "https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png"
+        )
+    )
+
+    # Base64 image
+    image_b64 = "iVBORw0K...."
+    res = await b.TestImageInput(
+        img=Image.from_base64("image/png", image_b64)
+    )
+```
+
+```typescript TypeScript
+import { b } from '../baml_client'
+import { Image } from "@boundaryml/baml"
+...
+
+  // URL
+  let res = await b.TestImageInput(
+    Image.fromUrl('https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png'),
+  )
+
+  // Base64
+  const image_b64 = "iVB0R..."
+  res = await b.TestImageInput(
+    Image.fromBase64('image/png', image_b64),
+  )
+
+```
+
+```ruby Ruby (beta)
+we're working on it!
+```
+
+
+
+#### Audio
+Calling functions that have `audio` types. See [audio types](/ref/baml/types#audio).
+
+
+```python Python
+from baml_py import Audio
+from baml_client import b
+
+async def run():
+    # from URL
+    res = await b.TestAudioInput(
+        audio=Audio.from_url(
+            "https://actions.google.com/sounds/v1/emergency/beeper_emergency_call.ogg"
+        )
+    )
+
+    # Base64
+    b64 = "iVBORw0K...."
+    res = await b.TestAudioInput(
+        audio=Audio.from_base64("audio/ogg", b64)
+    )
+```
+
+```typescript TypeScript
+import { b } from '../baml_client'
+import { Audio } from "@boundaryml/baml"
+...
+
+  // URL
+  let res = await b.TestAudioInput(
+    Audio.fromUrl('https://actions.google.com/sounds/v1/emergency/beeper_emergency_call.ogg'),
+  )
+
+  // Base64
+  const audio_base64 = ".."
+  res = await b.TestAudioInput(
+    Audio.fromBase64('audio/ogg', audio_base64),
+  )
+
+```
+
+```ruby Ruby (beta)
+we're working on it!
+```
+
diff --git a/fern/01-guide/04-baml-basics/my-first-function.mdx b/fern/01-guide/04-baml-basics/my-first-function.mdx
new file mode 100644
index 000000000..2cc52ad77
--- /dev/null
+++ b/fern/01-guide/04-baml-basics/my-first-function.mdx
@@ -0,0 +1,153 @@
+---
+title: Prompting in BAML
+---
+
+
+We recommend reading the [installation](/guide/installation-language/python) instructions first.
+
+
+BAML functions are special definitions that get converted into real code (Python, TS, etc) that calls LLMs. Think of them as a way to define AI-powered functions that are type-safe and easy to use in your application.
+
+### What BAML Functions Actually Do
+When you write a BAML function like this:
+
+```rust BAML
+function ExtractResume(resume_text: string) -> Resume {
+  client "openai/gpt-4o"
+  // The prompt uses Jinja syntax... more on this soon.
+  prompt #"
+    Extract info from this text.
+
+    {# special macro to print the output schema + instructions #}
+    {{ ctx.output_format }}
+
+    Resume:
+    ---
+    {{ resume_text }}
+    ---
+  "#
+}
```

+
+BAML converts it into code that:
+
+1. Takes your input (`resume_text`).
+2. Sends a request to OpenAI's GPT-4o API with your prompt.
+3. Parses the JSON response into your `Resume` type.
+4. Returns a type-safe object you can use in your code.
+
+### Prompt Preview + seeing the CURL request
+For maximum transparency, you can see the API request BAML makes to the LLM provider using the VSCode extension.
+Below you can see the **Prompt Preview**, where you see the full rendered prompt (once you add a test case):
+
+Prompt preview
+
+Note how the `{{ ctx.output_format }}` macro is replaced with the output schema instructions.
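+
+For intuition, here is roughly what that macro expands to, assuming a `Resume` class with `name string` and `skills string[]` fields (a sketch only; the exact text BAML renders may differ across versions):
+
+```text
+Answer in JSON using this schema:
+{
+  name: string,
+  skills: string[],
+}
+```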
+
+The Playground will also show you the **Raw CURL request** (if you click on the "curl" checkbox):
+
+Raw CURL request
+
+
+Always include the `{{ ctx.output_format }}` macro in your prompt. This injects your output schema into the prompt, which helps the LLM output the right thing. You can also [customize what it prints](/reference/prompt-syntax/ctx-output-format).
+
+One of our design philosophies is to never hide the prompt from you. You control and can always see the entire prompt.
+
+
+## Calling the function
+Recall that BAML will generate a `baml_client` directory in the language of your choice using the parameters in your [`generator`](/ref/baml/generator) config. This contains the function and types you defined.
+
+Now we can call the function, which will make a request to the LLM and return the `Resume` object:
+
+```python python
+# Import the baml client (We call it `b` for short)
+from baml_client import b
+# Import the Resume type, which is now a Pydantic model!
+from baml_client.types import Resume
+
+def main():
+    resume_text = """Jason Doe\nPython, Rust\nUniversity of California, Berkeley, B.S.\nin Computer Science, 2020\nAlso an expert in Tableau, SQL, and C++\n"""
+
+    # this function comes from the autogenerated "baml_client".
+    # It calls the LLM you specified and handles the parsing.
+    resume = b.ExtractResume(resume_text)
+
+    # Fully type-checked and validated!
+    assert isinstance(resume, Resume)
+
+```
+
+```typescript typescript
+import { b } from 'baml_client'
+import { Resume } from 'baml_client/types'
+
+async function main() {
+  const resume_text = `Jason Doe\nPython, Rust\nUniversity of California, Berkeley, B.S.\nin Computer Science, 2020\nAlso an expert in Tableau, SQL, and C++`
+
+  // this function comes from the autogenerated "baml_client".
+  // It calls the LLM you specified and handles the parsing.
+  const resume = await b.ExtractResume(resume_text)
+
+  // Fully type-checked and validated!
+  console.log(resume.name) // "Jason Doe"
+}
+```
+
+```ruby ruby
+
+require_relative "baml_client/client"
+b = Baml.Client
+
+# Note this is not async
+resume = b.ExtractResume(
+  resume_text: "Jason Doe\nPython, Rust\nUniversity of California, Berkeley, B.S.\nin Computer Science, 2020\nAlso an expert in Tableau, SQL, and C++\n"
+)
+```
+
+
+
+
+
+Do not modify any code inside `baml_client`, as it's autogenerated.
+
+
+## Next steps
+
+Check out [PromptFiddle](https://promptfiddle.com) to see various interactive BAML function examples, or view the [example prompts](/examples).
+
+Read the next guide to learn more about choosing different LLM providers and running tests in the VSCode extension.
+
+
+- Use any provider or open-source model
+- Test your functions in the VSCode extension
+- Define user or assistant roles in your prompts
+- Use function calling or tools in your prompts
+
\ No newline at end of file
diff --git a/fern/01-guide/04-baml-basics/streaming.mdx b/fern/01-guide/04-baml-basics/streaming.mdx
new file mode 100644
index 000000000..6f04c6614
--- /dev/null
+++ b/fern/01-guide/04-baml-basics/streaming.mdx
@@ -0,0 +1,257 @@
+---
+slug: /guide/baml-basics/streaming
+---
+
+BAML lets you stream structured JSON output from LLMs as it comes in.
+
+If you tried streaming JSON output from an LLM, you'd see something like:
+```
+{"items": [{"name": "Appl
+{"items": [{"name": "Apple", "quantity": 2, "price": 1.
+{"items": [{"name": "Apple", "quantity": 2, "price": 1.50}], "total_cost": +{"items": [{"name": "Apple", "quantity": 2, "price": 1.50}], "total_cost": 3.00} # Completed +``` + +BAML automatically fixes this partial JSON, and transforms all your types into `Partial` types with all `Optional` fields only during the stream. + +You can check out more examples (including streaming in FastAPI and NextJS) in the [BAML Examples] repo. + +[call BAML functions]: /docs/calling-baml/calling-functions +[BAML Examples]: https://github.com/BoundaryML/baml-examples/tree/main + +Lets stream the output of this function `function ExtractReceiptInfo(email: string) -> ReceiptInfo` for our example: + + + +```rust +class ReceiptItem { + name string + description string? + quantity int + price float +} + +class ReceiptInfo { + items ReceiptItem[] + total_cost float? +} + +function ExtractReceiptInfo(email: string) -> ReceiptInfo { + client GPT4o + prompt #" + Given the receipt below: + + {{ email }} + + {{ ctx.output_format }} + "# +} +``` + + + + + +BAML will generate `b.stream.ExtractReceiptInfo()` for you, which you can use like so: + +```python main.py +import asyncio +from baml_client import b, partial_types, types + +# Using a stream: +def example1(receipt: str): + stream = b.stream.ExtractReceiptInfo(receipt) + + # partial is a Partial type with all Optional fields + for partial in stream: + print(f"partial: parsed {len(partial.items)} items (object: {partial})") + + # final is the full, original, validated ReceiptInfo type + final = stream.get_final_response() + print(f"final: {len(final.items)} items (object: {final})") + +# Using only get_final_response() of a stream +# +# In this case, you should just use b.ExtractReceiptInfo(receipt) instead, +# which is slightly faster and more efficient. 
+def example2(receipt: str): + final = b.stream.ExtractReceiptInfo(receipt).get_final_response() + print(f"final: {len(final.items)} items (object: {final})") + +# Using the async client: +async def example3(receipt: str): + # Note the import of the async client + from baml_client.async_client import b + stream = b.stream.ExtractReceiptInfo(receipt) + async for partial in stream: + print(f"partial: parsed {len(partial.items)} items (object: {partial})") + + final = await stream.get_final_response() + print(f"final: {len(final.items)} items (object: {final})") + +receipt = """ +04/14/2024 1:05 pm + +Ticket: 220000082489 +Register: Shop Counter +Employee: Connor +Customer: Sam +Item # Price +Guide leash (1 Pair) uni UNI +1 $34.95 +The Index Town Walls +1 $35.00 +Boot Punch +3 $60.00 +Subtotal $129.95 +Tax ($129.95 @ 9%) $11.70 +Total Tax $11.70 +Total $141.65 +""" + +if __name__ == '__main__': + asyncio.run(example1(receipt)) + asyncio.run(example2(receipt)) + asyncio.run(example3(receipt)) +``` + + + +BAML will generate `b.stream.ExtractReceiptInfo()` for you, which you can use like so: + +```ts main.ts +import { b } from './baml_client' + +// Using both async iteration and getFinalResponse() from a stream +const example1 = async (receipt: string) => { + const stream = b.stream.ExtractReceiptInfo(receipt) + + // partial is a Partial type with all Optional fields + for await (const partial of stream) { + console.log(`partial: ${partial.items?.length} items (object: ${partial})`) + } + + // final is the full, original, validated ReceiptInfo type + const final = await stream.getFinalResponse() + console.log(`final: ${final.items.length} items (object: ${final})`) +} + +// Using only async iteration of a stream +const example2 = async (receipt: string) => { + for await (const partial of b.stream.ExtractReceiptInfo(receipt)) { + console.log(`partial: ${partial.items?.length} items (object: ${partial})`) + } +} + +// Using only getFinalResponse() of a stream +// +// In this case, you should just use b.ExtractReceiptInfo(receipt) instead, +// which is faster and more efficient. 
+const example3 = async (receipt: string) => { + const final = await b.stream.ExtractReceiptInfo(receipt).getFinalResponse() + console.log(`final: ${final.items.length} items (object: ${final})`) +} + +const receipt = ` +04/14/2024 1:05 pm + +Ticket: 220000082489 +Register: Shop Counter +Employee: Connor +Customer: Sam +Item # Price +Guide leash (1 Pair) uni UNI +1 $34.95 +The Index Town Walls +1 $35.00 +Boot Punch +3 $60.00 +Subtotal $129.95 +Tax ($129.95 @ 9%) $11.70 +Total Tax $11.70 +Total $141.65 +` + +if (require.main === module) { + example1(receipt) + example2(receipt) + example3(receipt) +} +``` + + + +BAML will generate `Baml.Client.stream.ExtractReceiptInfo()` for you, +which you can use like so: + +```ruby main.rb +require_relative "baml_client/client" + +$b = Baml.Client + +# Using both iteration and get_final_response() from a stream +def example1(receipt) + stream = $b.stream.ExtractReceiptInfo(receipt) + + stream.each do |partial| + puts "partial: #{partial.items&.length} items" + end + + final = stream.get_final_response + puts "final: #{final.items.length} items" +end + +# Using only iteration of a stream +def example2(receipt) + $b.stream.ExtractReceiptInfo(receipt).each do |partial| + puts "partial: #{partial.items&.length} items" + end +end + +# Using only get_final_response() of a stream +# +# In this case, you should just use BamlClient.ExtractReceiptInfo(receipt) instead, +# which is faster and more efficient. +def example3(receipt) + final = $b.stream.ExtractReceiptInfo(receipt).get_final_response + puts "final: #{final.items.length} items" +end + +receipt = <<~RECEIPT + 04/14/2024 1:05 pm + + Ticket: 220000082489 + Register: Shop Counter + Employee: Connor + Customer: Sam + Item # Price + Guide leash (1 Pair) uni UNI + 1 $34.95 + The Index Town Walls + 1 $35.00 + Boot Punch + 3 $60.00 + Subtotal $129.95 + Tax ($129.95 @ 9%) $11.70 + Total Tax $11.70 + Total $141.65 +RECEIPT + +if __FILE__ == $0 + example1(receipt) + example2(receipt) + example3(receipt) +end +``` + + + + +Streaming is not yet supported via OpenAPI, but it will be coming soon! + + + + + +Number fields are always streamed in only when the LLM completes them. E.g. if the final number is 129.95, you'll only see null or 129.95 instead of partial numbers like 1, 12, 129.9, etc. + \ No newline at end of file diff --git a/fern/01-guide/04-baml-basics/switching-llms.mdx b/fern/01-guide/04-baml-basics/switching-llms.mdx new file mode 100644 index 000000000..ccb950ac3 --- /dev/null +++ b/fern/01-guide/04-baml-basics/switching-llms.mdx @@ -0,0 +1,62 @@ +--- +title: Switching LLMs +slug: guide/baml-basics/switching-llms +--- + +BAML Supports getting structured output from **all** major providers as well as all OpenAI-API compatible open-source models. See [LLM Providers Reference](/ref/llm-client-providers/open-ai) for how to set each one up. + +BAML can help you get structured output from **any Open-Source model**, with better performance than other techniques, even when it's not officially supported via a Tool-Use API (like o1-preview) or fine-tuned for it! [Read more about how BAML does this](https://www.boundaryml.com/blog/schema-aligned-parsing). + + +### Using `client "/"` + +Using `openai/model-name` or `anthropic/model-name` will assume you have the ANTHROPIC_API_KEY or OPENAI_API_KEY environment variables set. + +```rust BAML +function MakeHaiku(topic: string) -> string { + client "openai/gpt-4o" // or anthropic/claude-3-5-sonnet-20240620 + prompt #" + Write a haiku about {{ topic }}. 
+ "# +} +``` + +### Using a named client +Use this if you are using open-source models or need customization +The longer form uses a named client, and supports adding any parameters supported by the provider or changing the temperature, top_p, etc. + +```rust BAML +client MyClient { + provider "openai" + options { + model "gpt-4o" + api_key env.OPENAI_API_KEY + // other params like temperature, top_p, etc. + temperature 0.0 + base_url "https://my-custom-endpoint.com/v1" + // add headers + headers { + "anthropic-beta" "prompt-caching-2024-07-31" + } + } + +} + +function MakeHaiku(topic: string) -> string { + client MyClient + prompt #" + Write a haiku about {{ topic }}. + "# +} +``` + +Consult the [provider documentation](#fields) for a list of supported providers +and models, the default options, and setting [retry policies](/docs/reference/retry-policy). + + +If you want to specify which client to use at runtime, in your Python/TS/Ruby code, +you can use the [client registry](/guide/baml-advanced/llm-client-registry) to do so. + +This can come in handy if you're trying to, say, send 10% of your requests to a +different model. + \ No newline at end of file diff --git a/fern/01-guide/04-baml-basics/testing-functions.mdx b/fern/01-guide/04-baml-basics/testing-functions.mdx new file mode 100644 index 000000000..292edda43 --- /dev/null +++ b/fern/01-guide/04-baml-basics/testing-functions.mdx @@ -0,0 +1,277 @@ +--- +slug: /guide/baml-basics/testing-functions +--- + + +You can test your BAML functions in the VSCode Playground by adding a `test` snippet into a BAML file: + +```baml +enum Category { + Refund + CancelOrder + TechnicalSupport + AccountIssue + Question +} + +function ClassifyMessage(input: string) -> Category { + client GPT4Turbo + prompt #" + ... truncated ... + "# +} + +test Test1 { + functions [ClassifyMessage] + args { + // input is the first argument of ClassifyMessage + input "Can't access my account using my usual login credentials, and each attempt results in an error message stating 'Invalid username or password.' I have tried resetting my password using the 'Forgot Password' link, but I haven't received the promised password reset email." + } +} +``` +See the [interactive examples](https://promptfiddle.com) + +The BAML playground will give you a starting snippet to copy that will match your function signature. + + +BAML doesn't use colons `:` between key-value pairs except in function parameters. + + +
+
+## Complex object inputs
+
+Objects are injected as dictionaries:
+```rust
+class Message {
+  user string
+  content string
+}
+
+function ClassifyMessage(messages: Message[]) -> Category {
+...
+}
+
+test Test1 {
+  functions [ClassifyMessage]
+  args {
+    messages [
+      {
+        user "hey there"
+        // multi-line string using the #"..."# syntax
+        content #"
+          You can also add a multi-line
+          string with the hashtags
+          Instead of ugly json with \n
+        "#
+      }
+    ]
+  }
+}
+```
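+
+The same dictionary syntax nests. A sketch, assuming a hypothetical `ClassifyThread(thread: Thread)` function where `Thread` has an `author string` field and a `messages Message[]` field:
+
+```baml
+test NestedTest {
+  functions [ClassifyThread]
+  args {
+    thread {
+      author "sam"
+      messages [
+        {
+          user "sam"
+          content "I want a refund"
+        }
+      ]
+    }
+  }
+}
+```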
+## Test Image Inputs in the Playground + +For a function that takes an image as input, like so: + +```baml +function MyFunction(myImage: image) -> string { + client GPT4o + prompt #" + Describe this image: {{myImage}} + "# +} +``` + +You can define test cases using image files, URLs, or base64 strings. + + + + + + + Committing a lot of images into your repository can make it slow to clone and + pull your repository. If you expect to commit >500MiB of images, please read + [GitHub's size limit documentation][github-large-files] and consider setting + up [large file storage][github-lfs]. + + +[github-large-files]: https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-large-files-on-github +[github-lfs]: https://docs.github.com/en/repositories/working-with-files/managing-large-files/configuring-git-large-file-storage + +```baml +test Test1 { + functions [MyFunction] + args { + myImage { + file "../path/to/image.png" + } + } +} +``` + + + The path to the image file, relative to the directory containing the current BAML file. + + Image files must be somewhere in `baml_src/`. + + + + The mime-type of the image. If not set, and the provider expects a mime-type + to be provided, BAML will try to infer it based on first, the file extension, + and second, the contents of the file. + + + + + +```baml +test Test1 { + functions [MyFunction] + args { + myImage { + url "https...." + } + } +} +``` + + + The publicly accessible URL from which the image may be downloaded. + + + + The mime-type of the image. If not set, and the provider expects a mime-type + to be provided, BAML will try to infer it based on the contents of the file. + + + + + +```baml +test Test1 { + args { + myImage { + base64 "base64string" + media_type "image/png" + } + } +} +``` + + + The base64-encoded image data. + + + + The mime-type of the image. If not set, and the provider expects a mime-type + to be provided, BAML will try to infer it based on the contents of the file. + + If `base64` is a data URL, this field will be ignored. + + + + + +
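+
+Media arguments can also be combined with ordinary ones; each parameter is just another key in `args`. A sketch, assuming a hypothetical `DescribeForAudience(myImage: image, audience: string)` function:
+
+```baml
+test MixedArgsTest {
+  functions [DescribeForAudience]
+  args {
+    myImage {
+      url "https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png"
+    }
+    audience "film students"
+  }
+}
+```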
+## Test Audio Inputs in the Playground + +For a function that takes audio as input, like so: + +```baml +function MyFunction(myAudio: audio) -> string { + client GPT4o + prompt #" + Describe this audio: {{myAudio}} + "# +} +``` + +You can define test cases using audio files, URLs, or base64 strings. + + + + + + + Committing a lot of audio files into your repository can make it slow to clone and + pull your repository. If you expect to commit >500MiB of audio, please read + [GitHub's size limit documentation][github-large-files] and consider setting + up [large file storage][github-lfs]. + + +```baml +test Test1 { + functions [MyFunction] + args { + myAudio { + file "../path/to/audio.mp3" + } + } +} +``` + + + The path to the audio file, relative to the directory containing the current BAML file. + + audio files must be somewhere in `baml_src/`. + + + + The mime-type of the audio. If not set, and the provider expects a mime-type + to be provided, BAML will try to infer it based on first, the file extension, + and second, the contents of the file. + + + + + +```baml +test Test1 { + functions [MyFunction] + args { + myAudio { + url "https...." + } + } +} +``` + + + The publicly accessible URL from which the audio may be downloaded. + + + + The mime-type of the audio. If not set, and the provider expects a mime-type + to be provided, BAML will try to infer it based on the contents of the file. + + + + + +```baml +test Test1 { + args { + myAudio { + base64 "base64string" + media_type "audio/mp3" + } + } +} +``` + + + The base64-encoded audio data. + + + + The mime-type of the audio. If not set, and the provider expects a mime-type + to be provided, BAML will try to infer it based on the contents of the file. + + If `base64` is a data URL, this field will be ignored. + + + + +## Assertions + +This is coming soon! We'll be supporting assertions in test cases. For now -- when you run a test you'll only see errors parsing the output into the right schema, or LLM-provider errors. \ No newline at end of file diff --git a/fern/01-guide/05-baml-advanced/client-registry.mdx b/fern/01-guide/05-baml-advanced/client-registry.mdx new file mode 100644 index 000000000..f89f23ee6 --- /dev/null +++ b/fern/01-guide/05-baml-advanced/client-registry.mdx @@ -0,0 +1,163 @@ +--- +title: Client Registry +--- + +If you need to modify the model / parameters for an LLM client at runtime, you can modify the `ClientRegistry` for any specified function. 
+ + + + + +```python +import os +from baml_py import ClientRegistry + +async def run(): + cr = ClientRegistry() + # Creates a new client + cr.add_llm_client(name='MyAmazingClient', provider='openai', options={ + "model": "gpt-4o", + "temperature": 0.7, + "api_key": os.environ.get('OPENAI_API_KEY') + }) + # Sets MyAmazingClient as the primary client + cr.set_primary('MyAmazingClient') + + # ExtractResume will now use MyAmazingClient as the calling client + res = await b.ExtractResume("...", { "client_registry": cr }) +``` + + + + +```typescript +import { ClientRegistry } from '@boundaryml/baml' + +async function run() { + const cr = new ClientRegistry() + // Creates a new client + cr.addLlmClient({ name: 'MyAmazingClient', provider: 'openai', options: { + model: "gpt-4o", + temperature: 0.7, + api_key: process.env.OPENAI_API_KEY + }}) + // Sets MyAmazingClient as the primary client + cr.setPrimary('MyAmazingClient') + + // ExtractResume will now use MyAmazingClient as the calling client + const res = await b.ExtractResume("...", { clientRegistry: cr }) +} +``` + + + + +```ruby +require_relative "baml_client/client" + +def run + cr = Baml::ClientRegistry.new + + # Creates a new client + cr.add_llm_client( + name: 'MyAmazingClient', + provider: 'openai', + options: { + model: 'gpt-4o', + temperature: 0.7, + api_key: ENV['OPENAI_API_KEY'] + } + ) + + # Sets MyAmazingClient as the primary client + cr.set_primary('MyAmazingClient') + + # ExtractResume will now use MyAmazingClient as the calling client + res = Baml.Client.extract_resume(input: '...', baml_options: { client_registry: cr }) +end + +# Call the asynchronous function +run +``` + + + + +The API supports passing client registry as a field on `__baml_options__` in the request body. + +Example request body: + +```json +{ + "resume": "Vaibhav Gupta", + "__baml_options__": { + "client_registry": { + "clients": [ + { + "name": "OpenAI", + "provider": "openai", + "retry_policy": null, + "options": { + "model": "gpt-4o-mini", + "api_key": "sk-..." + } + } + ], + "primary": "OpenAI" + } + } +} +``` + +```sh +curl -X POST http://localhost:2024/call/ExtractResume \ + -H 'Content-Type: application/json' -d @body.json +``` + + + + + +## ClientRegistry Interface + + + Note: `ClientRegistry` is imported from `baml_py` in Python and `@boundaryml/baml` in TypeScript, not `baml_client`. + + As we mature `ClientRegistry`, we will add a more type-safe and ergonomic interface directly in `baml_client`. See [Github issue #766](https://github.com/BoundaryML/baml/issues/766). + + +Methods use `snake_case` in Python and `camelCase` in TypeScript. + +### add_llm_client / addLlmClient +A function to add an LLM client to the registry. + + + The name of the client. + + + Using the exact same name as a client also defined in .baml files overwrites the existing client whenever the ClientRegistry is used. + + + + + + +The name of a retry policy that is already defined in a .baml file. See [Retry Policies](/docs/snippets/clients/retry). + + +### set_primary / setPrimary +This sets the client for the function to use. (i.e. replaces the `client` property in a function) + + + The name of the client to use. + + This can be a new client that was added with `add_llm_client` or an existing client that is already in a .baml file. 
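+
+As a sketch of the traffic-splitting use case mentioned earlier, you can build a registry conditionally. The helper below is illustrative, not a BAML API; it only uses the `add_llm_client` and `set_primary` methods documented above:
+
+```python
+import random
+
+from baml_py import ClientRegistry
+from baml_client import b
+
+async def extract_resume_with_experiment(text: str):
+    # Hypothetical helper: route ~10% of requests to an experimental model.
+    baml_options = {}
+    if random.random() < 0.1:
+        cr = ClientRegistry()
+        cr.add_llm_client(name='ExperimentClient', provider='openai', options={
+            "model": "gpt-4o-mini",
+        })
+        cr.set_primary('ExperimentClient')
+        baml_options["client_registry"] = cr
+
+    # With no registry, the client declared in the .baml file is used.
+    return await b.ExtractResume(text, baml_options)
+```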
+ diff --git a/fern/01-guide/05-baml-advanced/dynamic-types.mdx b/fern/01-guide/05-baml-advanced/dynamic-types.mdx new file mode 100644 index 000000000..096a55f56 --- /dev/null +++ b/fern/01-guide/05-baml-advanced/dynamic-types.mdx @@ -0,0 +1,346 @@ +--- +title: Dynamic Types - TypeBuilder +--- + +Sometimes you have **output schemas that change at runtime** -- for example if +you have a list of Categories that you need to classify that come from a +database, or your schema is user-provided. + +`TypeBuilder` is used to create or modify dynamic types at runtime to achieve this. + + +### Dynamic BAML Enums + +Imagine we want to make a categorizer prompt, but the list of categories to output come from a database. +1. Add `@@dynamic` to the class or enum definition to mark it as dynamic in BAML. + +```rust baml +enum Category { + VALUE1 // normal static enum values that don't change + VALUE2 + @@dynamic // this enum can have more values added at runtime +} + +// The Category enum can now be modified at runtime! +function DynamicCategorizer(input: string) -> Category { + client GPT4 + prompt #" + Given a string, classify it into a category + {{ input }} + + {{ ctx.output_format }} + "# +} + +``` + +2. Import the `TypeBuilder` from baml_client in your runtime code and modify `Category`. All dynamic types you +define in BAML will be available as properties of `TypeBuilder`. Think of the +typebuilder as a registry of modified runtime types that the baml function will +read from when building the output schema in the prompt. + + + + +```python +from baml_client.type_builder import TypeBuilder +from baml_client import b + +async def run(): + tb = TypeBuilder() + tb.Category.add_value('VALUE3') + tb.Category.add_value('VALUE4') + # Pass the typebuilder in the baml_options argument -- the last argument of the function. + res = await b.DynamicCategorizer("some input", { "tb": tb }) + # Now res can be VALUE1, VALUE2, VALUE3, or VALUE4 + print(res) + +``` + + + +```typescript +import TypeBuilder from '../baml_client/type_builder' +import { + b +} from '../baml_client' + +async function run() { + const tb = new TypeBuilder() + tb.Category.addValue('VALUE3') + tb.Category.addValue('VALUE4') + const res = await b.DynamicCategorizer("some input", { tb: tb }) + // Now res can be VALUE1, VALUE2, VALUE3, or VALUE4 + console.log(res) +} +``` + + + +```ruby +require_relative '../baml_client' + +def run + tb = Baml::TypeBuilder.new + tb.Category.add_value('VALUE3') + tb.Category.add_value('VALUE4') + res = Baml.Client.dynamic_categorizer(input: "some input", baml_options: {tb: tb}) + # Now res can be VALUE1, VALUE2, VALUE3, or VALUE4 + puts res +end +``` + + + +Dynamic types are not yet supported when used via OpenAPI. + +Please let us know if you want this feature, either via [Discord] or [GitHub][openapi-feedback-github-issue]. + +[Discord]: https://discord.gg/BTNBeXGuaS +[openapi-feedback-github-issue]: https://github.com/BoundaryML/baml/issues/892 + + + + + + +### Dynamic BAML Classes +Now we'll add some properties to a `User` class at runtime using @@dynamic. + + +```rust BAML +class User { + name string + age int + @@dynamic +} + +function DynamicUserCreator(user_info: string) -> User { + client GPT4 + prompt #" + Extract the information from this chunk of text: + "{{ user_info }}" + + {{ ctx.output_format }} + "# +} +``` + +We can then modify the `User` schema at runtime. Since we marked `User` with `@@dynamic`, it'll be available as a property of `TypeBuilder`. 
+ + + +```python Python +from baml_client.type_builder import TypeBuilder +from baml_client import b + +async def run(): + tb = TypeBuilder() + tb.User.add_property('email', tb.string()) + tb.User.add_property('address', tb.string()).description("The user's address") + res = await b.DynamicUserCreator("some user info", { "tb": tb }) + # Now res can have email and address fields + print(res) + +``` + +```typescript TypeScript +import TypeBuilder from '../baml_client/type_builder' +import { + b +} from '../baml_client' + +async function run() { + const tb = new TypeBuilder() + tb.User.add_property('email', tb.string()) + tb.User.add_property('address', tb.string()).description("The user's address") + const res = await b.DynamicUserCreator("some user info", { tb: tb }) + // Now res can have email and address fields + console.log(res) +} +``` + +```ruby Ruby +require_relative 'baml_client/client' + +def run + tb = Baml::TypeBuilder.new + tb.User.add_property('email', tb.string) + tb.User.add_property('address', tb.string).description("The user's address") + + res = Baml.Client.dynamic_user_creator(input: "some user info", baml_options: {tb: tb}) + # Now res can have email and address fields + puts res +end +``` + + +### Creating new dynamic classes or enums not in BAML +The previous examples showed how to modify existing types. Here we create a new `Hobbies` enum, and a new class called `Address` without having them defined in BAML. + +Note that you must attach the new types to the existing Return Type of your BAML function(in this case it's `User`). + + + +```python Python +from baml_client.type_builder import TypeBuilder +from baml_client.async_client import b + +async def run(): + tb = TypeBuilder() + hobbies_enum = tb.add_enum("Hobbies") + hobbies_enum.add_value("Soccer") + hobbies_enum.add_value("Reading") + + address_class = tb.add_class("Address") + address_class.add_property("street", tb.string()).description("The user's street address") + + tb.User.add_property("hobby", hobbies_enum.type().optional()) + tb.User.add_property("address", address_class.type().optional()) + res = await b.DynamicUserCreator("some user info", {"tb": tb}) + # Now res might have the hobby property, which can be Soccer or Reading + print(res) + +``` + +```typescript TypeScript +import TypeBuilder from '../baml_client/type_builder' +import { b } from '../baml_client' + +async function run() { + const tb = new TypeBuilder() + const hobbiesEnum = tb.addEnum('Hobbies') + hobbiesEnum.addValue('Soccer') + hobbiesEnum.addValue('Reading') + + const addressClass = tb.addClass('Address') + addressClass.addProperty('street', tb.string()).description("The user's street address") + + + tb.User.addProperty('hobby', hobbiesEnum.type().optional()) + tb.User.addProperty('address', addressClass.type()) + const res = await b.DynamicUserCreator("some user info", { tb: tb }) + // Now res might have the hobby property, which can be Soccer or Reading + console.log(res) +} +``` + +```ruby Ruby +require_relative 'baml_client/client' + +def run + tb = Baml::TypeBuilder.new + hobbies_enum = tb.add_enum('Hobbies') + hobbies_enum.add_value('Soccer') + hobbies_enum.add_value('Reading') + + address_class = tb.add_class('Address') + address_class.add_property('street', tb.string) + + tb.User.add_property('hobby', hobbies_enum.type.optional) + tb.User.add_property('address', address_class.type.optional) + + res = Baml::Client.dynamic_user_creator(input: "some user info", baml_options: { tb: tb }) + # Now res might have the hobby property, 
which can be Soccer or Reading
+  puts res
+end
+```
+
+
+TypeBuilder provides methods for building different kinds of types:
+
+| Method | Description | Example |
+|--------|-------------|---------|
+| `string()` | Creates a string type | `tb.string()` |
+| `int()` | Creates an integer type | `tb.int()` |
+| `float()` | Creates a float type | `tb.float()` |
+| `bool()` | Creates a boolean type | `tb.bool()` |
+| `list()` | Makes a type into a list | `tb.string().list()` |
+| `optional()` | Makes a type optional | `tb.string().optional()` |
+
+### Adding descriptions to dynamic types
+
+```python Python
+tb = TypeBuilder()
+tb.User.add_property("email", tb.string()).description("The user's email")
+```
+
+```typescript TypeScript
+const tb = new TypeBuilder()
+tb.User.addProperty("email", tb.string()).description("The user's email")
+```
+
+```ruby Ruby
+tb = Baml::TypeBuilder.new
+tb.User.add_property("email", tb.string).description("The user's email")
+```
+
+### Building dynamic types from JSON schema
+
+We have a working implementation of this, but are waiting for a concrete use case to merge it.
+Please chime in on [the GitHub issue](https://github.com/BoundaryML/baml/issues/771) if this is
+something you'd like to use.
+
+```python Python
+from typing import Optional
+
+import pydantic
+from baml_client import b
+from baml_client.type_builder import TypeBuilder
+
+class Person(pydantic.BaseModel):
+    last_name: list[str]
+    height: Optional[float] = pydantic.Field(description="Height in meters")
+
+tb = TypeBuilder()
+tb.unstable_features.add_json_schema(Person.model_json_schema())
+
+res = await b.ExtractPeople(
+    "My name is Harrison. My hair is black and I'm 6 feet tall. I'm pretty good around the hoop. I like giraffes.",
+    {"tb": tb},
+)
+```
+
+```typescript TypeScript
+import { z } from 'zod'
+import { zodToJsonSchema } from 'zod-to-json-schema'
+import TypeBuilder from '../baml_client/type_builder'
+import { b } from '../baml_client'
+
+const personSchema = z.object({
+  animalLiked: z.object({
+    animal: z.string().describe('The animal mentioned, in singular form.'),
+  }),
+  hobbies: z.enum(['chess', 'sports', 'music', 'reading']).array(),
+  height: z.union([z.string(), z.number().int()]).describe('Height in meters'),
+})
+
+let tb = new TypeBuilder()
+tb.unstableFeatures.addJsonSchema(zodToJsonSchema(personSchema, 'Person'))
+
+const res = await b.ExtractPeople(
+  "My name is Harrison. My hair is black and I'm 6 feet tall. I'm pretty good around the hoop. I like giraffes.",
+  { tb },
+)
+```
+
+```ruby Ruby
+tb = Baml::TypeBuilder.new
+tb.unstable_features.add_json_schema(...)
+
+res = Baml::Client.extract_people(
+  input: "My name is Harrison. My hair is black and I'm 6 feet tall. I'm pretty good around the hoop. I like giraffes.",
+  baml_options: { tb: tb }
+)
+
+puts res
+```
+
+### Testing dynamic types in BAML
+This feature is coming soon! Let us know if you're interested in testing it out!
+
+You can still write tests in Python, TypeScript, Ruby, etc. in the meantime.
\ No newline at end of file
diff --git a/fern/01-guide/05-baml-advanced/prompt-caching.mdx b/fern/01-guide/05-baml-advanced/prompt-caching.mdx
new file mode 100644
index 000000000..45db6de9c
--- /dev/null
+++ b/fern/01-guide/05-baml-advanced/prompt-caching.mdx
@@ -0,0 +1,70 @@
+---
+title: Prompt Caching / Message Role Metadata
+---
+
+Recall that an LLM request usually looks like the example below, and that each `message` can carry provider-specific metadata. In this case, Anthropic supports a `cache_control` key.
+
+```curl {3,14} Anthropic Request
+curl https://api.anthropic.com/v1/messages \
+  -H "content-type: application/json" \
+  -H "anthropic-beta: prompt-caching-2024-07-31" \
+  -d '{
+    "model": "claude-3-5-sonnet-20241022",
+    "max_tokens": 1024,
+    "messages": [
+      {
+        "role": "user",
+        "content": [
+          {
+            "type": "text",
+            "text": "<the full book text to cache>",
+            "cache_control": {"type": "ephemeral"}
+          },
+          {
+            "type": "text",
+            "text": "Analyze the major themes in Pride and Prejudice."
+          }
+        ]
+      }
+    ]
+  }'
+```
+
+This maps almost one-to-one onto a BAML prompt; the only thing missing is the `cache_control` metadata.
+
+Let's add the `cache_control` metadata to each of our messages in BAML now.
+There are just two steps:
+
+### Allow role metadata and header in the client definition
+```baml {5-8} main.baml
+client AnthropicClient {
+  provider "anthropic"
+  options {
+    model "claude-3-5-sonnet-20241022"
+    allowed_role_metadata ["cache_control"]
+    headers {
+      "anthropic-beta" "prompt-caching-2024-07-31"
+    }
+  }
+}
+```
+
+### Add the metadata to the messages
+```baml {2,6} main.baml
+function AnalyzeBook(book: string) -> string {
+  client AnthropicClient
+  prompt #"
+    {{ _.role("user") }}
+    {{ book }}
+    {{ _.role("user", cache_control={"type": "ephemeral"}) }}
+    Analyze the major themes in Pride and Prejudice.
+  "#
+}
+```
+
+We have the `allowed_role_metadata` option so that if you swap to other LLM clients, we don't accidentally forward the wrong metadata to the new provider API.
+
+Remember to check the "raw curl" checkbox in the VSCode Playground to see the exact request being sent!
\ No newline at end of file
diff --git a/fern/01-guide/05-baml-advanced/reusing-prompt-snippets.mdx b/fern/01-guide/05-baml-advanced/reusing-prompt-snippets.mdx
new file mode 100644
index 000000000..ad4eed723
--- /dev/null
+++ b/fern/01-guide/05-baml-advanced/reusing-prompt-snippets.mdx
@@ -0,0 +1,41 @@
+---
+title: Reusing Prompt Snippets
+---
+
+Writing prompts requires a lot of string manipulation. BAML has a `template_string` to let you combine different string templates together. Under the hood, template strings use [jinja](/ref/prompt-syntax/what-is-jinja) to evaluate the string and its inputs.
+
+**Template Strings are functions that always return a string.** They can be used to define reusable parts of a prompt, or to make the prompt more readable by breaking it into smaller parts.
+
+Example:
+```baml BAML
+// Inject a list of "system" or "user" messages into the prompt.
+// Note the syntax -- there are no curlies. Just a string block.
+template_string PrintMessages(messages: Message[]) #"
+  {% for m in messages %}
+    {{ _.role(m.role) }}
+    {{ m.message }}
+  {% endfor %}
+"#
+
+function ClassifyConversation(messages: Message[]) -> Category[] {
+  client GPT4Turbo
+  prompt #"
+    Classify this conversation:
+    {{ PrintMessages(messages) }}
+
+    Use the following categories:
+    {{ ctx.output_format }}
+  "#
+}
+```
+
+In this example we call the template_string `PrintMessages` to subdivide the prompt into "user" or "system" messages using `_.role()` (see [message roles](/ref/prompt-syntax/role)). This lets us reuse the message-printing logic in multiple prompts.
+
+You can nest template strings inside one another and call them as many times as you like.
+
+The BAML linter may give you a warning when you use template strings due to a static analysis limitation. You can ignore this warning. If it renders in the playground, you're good!
+
+Use the playground preview to ensure your template string is being evaluated correctly!
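+
+Template strings can also take plain parameters, which makes shared instruction blocks reusable across functions. A short sketch (the names are illustrative):
+
+```baml BAML
+template_string AnswerInstructions(tone: string) #"
+  Answer concisely, in a {{ tone }} tone.
+  If you are unsure, say so explicitly.
+"#
+
+function AnswerFormally(question: string) -> string {
+  client GPT4Turbo
+  prompt #"
+    {{ _.role("user") }}
+    {{ question }}
+    {{ AnswerInstructions("formal") }}
+  "#
+}
+```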
+
diff --git a/fern/01-guide/05-baml-advanced/validations.mdx b/fern/01-guide/05-baml-advanced/validations.mdx
new file mode 100644
index 000000000..54cc426c4
--- /dev/null
+++ b/fern/01-guide/05-baml-advanced/validations.mdx
@@ -0,0 +1,336 @@
+
+With custom type validations, you can set specific rules to ensure your data's
+value falls within an acceptable range.
+
+BAML provides two types of validations:
+- **`@assert`** for strict validations. If a type fails an `@assert` validation, it
+  will not be returned in the response. If the failing assertion was part of the
+  top-level type, it will raise an exception. If it's part of a container, it
+  will be removed from the container.
+- **`@check`** for non-exception-raising validations. Whether a `@check` passes or
+  fails, the data will be returned. You can access the results of individual
+  checks in the response data.
+
+## Assertions
+
+Assertions are used to guarantee properties about a type or its components in a response.
+They can be written inline next to a field definition, on the line following it,
+or on a top-level type used in a function declaration.
+
+### Using `@assert`
+
+In the example below, BAML raises an exception if a function returns a `Foo` whose
+`bar` is not between 0 and 10, or if `NextInt8` returns a value outside the `int8` range.
+
+```baml BAML
+class Foo {
+  bar int @assert(between_0_and_10, {{ this > 0 and this < 10 }}) // this = the value of Foo.bar
+}
+
+function NextInt8(a: int) -> int @assert(ok_int8, {{ this >= -128 and this <= 127 }}) {
+  client GPT4
+  prompt #"Return the number after {{ a }}"#
+}
+```
+
+### Using `@assert` with `Union` Types
+
+Note that when using [`Unions`](/ref/baml/types#union-), it is
+crucial to specify where the `@assert` attribute is applied within the union
+type, as it is not known until runtime which type the value will be.
+
+```baml BAML
+class Foo {
+  bar (int @assert(positive, {{ this > 0 }}) | bool @assert(is_true, {{ this }}))
+}
+```
+
+In the above example, the `@assert` attribute is applied specifically to the
+`int` and `bool` instances of the `Union`, rather than to the `Foo.bar` field
+as a whole.
+
+Likewise, the keyword `this` refers to the value of the type instance it is
+directly associated with (e.g., `int` or `bool`).
+
+## Chaining Assertions
+You can have multiple assertions on a single field by chaining multiple `@assert` attributes.
+
+In this example, the asserts on `bar` and `baz` are equivalent.
+```baml BAML
+class Foo {
+  bar int @assert(between_0_and_10, {{ this > 0 and this < 10 }})
+  baz int @assert(positive, {{ this > 0 }}) @assert(less_than_10, {{ this < 10 }})
+}
+```
+
+Chained asserts are evaluated in order from left to right. If the first assert
+fails, the second assert will not be evaluated.
+
+## Writing Assertions
+
+Assertions are written as Jinja expressions and can be used to validate
+various types of data. Possible constraints include checking the length of a
+string, comparing two values, or verifying the presence of a substring with
+regular expressions.
+
+In the future, we plan to support shorthand syntax for common assertions to make
+writing them easier.
+
+For now, see our [Jinja cookbook / guide](/ref/prompt-syntax/what-is-jinja)
+or the [Minijinja filters docs](https://docs.rs/minijinja/latest/minijinja/filters/index.html#functions)
+for more information on writing expressions.
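+
+As a quick sketch of what these expressions look like on ordinary fields (the `Contact` class is illustrative), using only the `length` and `regex_match` filters shown elsewhere on this page:
+
+```baml BAML
+class Contact {
+  // Illustrative only: a loose shape check, not a full email validator.
+  email string @assert(valid_email, {{ this|regex_match("^[^@ ]+@[^@ ]+$") }})
+  summary string @assert(reasonable_length, {{ this|length > 0 and this|length < 500 }})
+}
+```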
+
+### Expression keywords
+
+- `this` refers to the value of the current field being validated.
+
+`this.field` is used to refer to a specific field within the context of `this`.
+Access nested fields of a data type by chaining the field names together with a `.` as shown below.
+```baml BAML
+class Resume {
+  name string
+  experience string[]
+}
+
+class Person {
+  resume Resume @assert(nonzero_experience, {{ this.experience|length > 0 }})
+  person_name string
+}
+```
+
+## Assertion Errors
+
+When validations fail, your BAML function raises a `BamlValidationError`
+exception, just as when parsing fails. You can catch this exception and handle
+it as you see fit.
+
+You can define custom names for each assertion, which will be included
+in the exception for that failure case. If you don't define a custom name,
+BAML will display the body of the assert expression.
+
+In this example, if the `quote` field is empty, BAML raises a
+`BamlValidationError` naming the failed assertion **exact_citation_found**. If the
+`website_link` field does not contain **"https://"**, it raises a
+`BamlValidationError` naming **valid_link**.
+
+```baml BAML
+class Citation {
+  // @assert(<name>, {{ <expression> }})
+  quote string @assert(exact_citation_found,
+    {{ this|length > 0 }}
+  )
+
+  website_link string @assert(valid_link,
+    {{ this|regex_match("https://") }}
+  )
+}
+```
+
+```python Python
+from baml_py import BamlValidationError
+from baml_client import b
+from baml_client.types import Citation
+
+def main():
+    try:
+        citation: Citation = b.GetCitation("SpaceX, is an American spacecraft manufacturer, launch service provider...")
+
+        # Access the value of the quote field
+        quote = citation.quote
+        website_link = citation.website_link
+        print(f"Quote: {quote} from {website_link}")
+
+    except BamlValidationError as e:
+        print(f"Validation error: {str(e)}")
+    except Exception as e:
+        print(f"An unexpected error occurred: {e}")
+```
+
+```typescript Typescript
+import { b, BamlValidationError } from './baml_client';
+import { Citation } from './baml_client/types';
+
+const main = () => {
+  try {
+    const citation: Citation = b.GetCitation("SpaceX, is an American spacecraft manufacturer, launch service provider...");
+
+    // Access the validated fields directly
+    const quote = citation.quote;
+    const websiteLink = citation.website_link;
+    console.log(`Quote: ${quote} from ${websiteLink}`);
+  } catch (e) {
+    if (e instanceof BamlValidationError) {
+      console.log(`Validation error: ${e}`);
+    } else {
+      console.error(e);
+    }
+  }
+};
+```
+
+## Checks
+
+`@check` attributes add validations without raising exceptions if they fail.
+Types with `@check` attributes allow the validations to be inspected at
+runtime.
+
+```baml BAML
+( bar int @check(less_than_zero, {{ this < 0 }}) )[]
+```
+
+```python Python
+List[baml_py.Checked[int, Dict[Literal["less_than_zero"], baml_py.Check]]]
+```
+
+```typescript Typescript
+Checked<number, "less_than_zero">[]
+```
+
+The following example uses both `@check` and `@assert`. If `line_number` fails its
+`@assert`, no `Citation` will be returned by `GetCitation()`. However, the
+`exact_citation_match` check can fail without interrupting the result. Because it
+is a `@check`, client code can inspect its outcome.
+
+```baml BAML
+class Citation {
+  quote string @check(
+    exact_citation_match,
+    {{ this|length > 0 }}
+  )
+  line_number string @assert(
+    has_line_number,
+    {{ this|length > 0 }}
+  )
+}
+
+function GetCitation(full_text: string) -> Citation {
+  client GPT4
+  prompt #"
+    Generate a citation of the text below in MLA format:
+    {{ full_text }}
+
+    {{ ctx.output_format }}
+  "#
+}
+```
+
+```python Python
+import baml_py
+from baml_client import b
+from baml_client.types import Citation
+
+def main():
+    citation = b.GetCitation("SpaceX, is an American spacecraft manufacturer, launch service provider...")
+
+    # Access the value of the quote field
+    quote = citation.quote.value
+    print(f"Quote: {quote}")
+
+    # Access a particular check.
+    quote_match_check = citation.quote.checks['exact_citation_match'].status
+    print(f"Citation match status: {quote_match_check}")
+
+    # Access each check and its status.
+    for check in baml_py.get_checks(citation.quote.checks):
+        print(f"Check {check.name}: {check.status}")
+```
+
+```typescript Typescript
+import { b } from './baml_client'
+import { Citation } from './baml_client/types'
+import { get_checks } from '@boundaryml/baml/checked'
+
+const main = () => {
+  const citation = b.GetCitation("SpaceX, is an American spacecraft manufacturer, launch service provider...")
+
+  // Access the value of the quote field
+  const quote = citation.quote.value
+  console.log(`Quote: ${quote}`)
+
+  // Access a particular check.
+  const quote_match_check = citation.quote.checks.exact_citation_match.status;
+  console.log(`Exact citation status: ${quote_match_check}`);
+
+  // Access each check and its status.
+  for (const check of get_checks(citation.quote.checks)) {
+    console.log(`Check: ${check.name}, Status: ${check.status}`)
+  }
+}
+```
+
+You can also chain multiple `@check` and `@assert` attributes on a single field.
+
+```baml BAML
+class Foo {
+  bar string @check(bar_nonempty, {{ this|length > 0 }})
+             @assert(bar_no_foo, {{ not this|contains("foo") }})
+             @check(bar_no_fizzle, {{ not this|contains("fizzle") }})
+             @assert(bar_no_baz, {{ not this|contains("baz") }})
+}
+```
+
+When using `@check`, all checks on the response data are evaluated even if
+one fails. In contrast, with `@assert`, a failure will stop the parsing process
+and immediately raise an exception.
+
+## Advanced Example
+
+The following example shows more complex minijinja expressions; see the
+[Minijinja filters docs](https://docs.rs/minijinja/latest/minijinja/filters/index.html#functions)
+for more information on the operators available to use in your assertions.
+
+--------
+
+The `Book` and `Library` classes below demonstrate how to validate a book's
+title, author, ISBN, publication year, genres, and a library's name and books.
+The block-level assertion in the `Library` class ensures that all books have
+unique ISBNs.
+
+```baml BAML
+class Book {
+  title string @assert(nonempty_title, {{ this|length > 0 }})
+  author string @assert(nonempty_author, {{ this|length > 0 }})
+  isbn string @assert(
+    valid_isbn,
+    {{ this|regex_match("^(97(8|9))?\d{9}(\d|X)$") }}
+  )
+  publication_year int @assert(valid_pub_year, {{ this >= 1000 and this <= 2100 }})
+  genres string[] @assert(valid_length, {{ this|length >= 1 and this|length <= 10 }})
+}
+
+class Library {
+  name string
+  books Book[] @assert(nonempty_books, {{ this|length > 0 }})
+               @assert(unique_isbn, {{ this|map(attribute='isbn')|unique()|length == this|length }})
+}
+```
diff --git a/fern/01-guide/06-prompt-engineering/chain-of-thought.mdx b/fern/01-guide/06-prompt-engineering/chain-of-thought.mdx
new file mode 100644
index 000000000..64e270f52
--- /dev/null
+++ b/fern/01-guide/06-prompt-engineering/chain-of-thought.mdx
@@ -0,0 +1,75 @@
+---
+title: Chain of Thought Prompting
+---
+
+Chain of thought prompting is a technique that encourages the language model to think step by step, reasoning through the problem before providing an answer. This can improve the quality of the response and make it easier to understand.
+
+In the example below, we use chain of thought prompting to extract information from an email.
+
+BAML will still parse the response as an `OrderInfo` object, even though there is additional text in the response.
+```baml
+class Email {
+  subject string
+  body string
+  from_address string
+}
+
+class OrderInfo {
+  order_status "ORDERED" | "SHIPPED" | "DELIVERED" | "CANCELLED"
+  tracking_number string?
+  estimated_arrival_date string?
+}
+
+function GetOrderInfo(email: Email) -> OrderInfo {
+  client GPT4o
+  prompt #"
+    Extract the info from this email in the INPUT:
+
+    INPUT:
+    -------
+    from: {{ email.from_address }}
+    Email Subject: {{ email.subject }}
+    Email Body: {{ email.body }}
+    -------
+
+    {{ ctx.output_format }}
+
+    Before you output the JSON, please explain your
+    reasoning step-by-step. Here is an example of how to do this:
+    'If we think step by step we can see that ...
+     therefore the output JSON is:
+    {
+      ... the json schema ...
+    }'
+  "#
+}
+
+test Test1 {
+  functions [GetOrderInfo]
+  args {
+    email {
+      from_address "hello@amazon.com"
+      subject "Your Amazon.com order of 'Wood Dowel Rods...' has shipped!"
+      body #"
+        Hi Sam, your package will arrive:
+        Thurs, April 4
+        Track your package:
+        www.amazon.com/gp/your-account/ship-track?ie=23&orderId123
+
+        On the way:
+        Wood Dowel Rods...
+        Order #113-7540940
+        Ship to:
+        Sam
+        SEATTLE, WA
+
+        Shipment total:
+        $0.00
+      "#
+    }
+  }
+}
+```
diff --git a/fern/01-guide/06-prompt-engineering/chat-history.mdx b/fern/01-guide/06-prompt-engineering/chat-history.mdx
new file mode 100644
index 000000000..8d953d041
--- /dev/null
+++ b/fern/01-guide/06-prompt-engineering/chat-history.mdx
@@ -0,0 +1,111 @@
+---
+title: Chat
+---
+
+In this guide we'll build a small chatbot that takes in user messages and generates responses.
+
+```baml chat-history.baml
+class MyUserMessage {
+  role "user" | "assistant"
+  content string
+}
+
+function ChatWithLLM(messages: MyUserMessage[]) -> string {
+  client "openai/gpt-4o"
+  prompt #"
+    Answer the user's questions based on the chat history:
+    {% for message in messages %}
+    {{ _.role(message.role) }}
+    {{ message.content }}
+    {% endfor %}
+
+    Answer:
+  "#
+}
+
+test TestName {
+  functions [ChatWithLLM]
+  args {
+    messages [
+      {
+        role "user"
+        content "Hello!"
+      }
+      {
+        role "assistant"
+        content "Hi!"
+      }
+    ]
+  }
+}
+
+```
+
+#### Code
+
+```python Python
+from baml_client import b
+from baml_client.types import MyUserMessage
+
+def main():
+    messages: list[MyUserMessage] = []
+
+    while True:
+        content = input("Enter your message (or 'quit' to exit): ")
+        if content.lower() == 'quit':
+            break
+
+        messages.append(MyUserMessage(role="user", content=content))
+
+        agent_response = b.ChatWithLLM(messages=messages)
+        print(f"AI: {agent_response}")
+        print()
+
+        # Add the agent's response to the chat history
+        messages.append(MyUserMessage(role="assistant", content=agent_response))
+
+if __name__ == "__main__":
+    main()
+```
+```typescript Typescript
+import { b, MyUserMessage } from 'baml_client';
+import * as readline from 'readline';
+
+const rl = readline.createInterface({
+  input: process.stdin,
+  output: process.stdout
+});
+
+const messages: MyUserMessage[] = [];
+
+function askQuestion(query: string): Promise<string> {
+  return new Promise((resolve) => {
+    rl.question(query, resolve);
+  });
+}
+
+async function main() {
+  while (true) {
+    const content = await askQuestion("Enter your message (or 'quit' to exit): ");
+    if (content.toLowerCase() === 'quit') {
+      break;
+    }
+
+    messages.push({ role: "user", content });
+
+    const agentResponse = await b.ChatWithLLM(messages);
+    console.log(`AI: ${agentResponse}`);
+    console.log();
+
+    // Add the agent's response to the chat history
+    messages.push({ role: "assistant", content: agentResponse });
+  }
+
+  rl.close();
+}
+
+main();
+```
+
diff --git a/fern/01-guide/06-prompt-engineering/hallucinations.mdx b/fern/01-guide/06-prompt-engineering/hallucinations.mdx
new file mode 100644
index 000000000..ebec43e07
--- /dev/null
+++ b/fern/01-guide/06-prompt-engineering/hallucinations.mdx
@@ -0,0 +1,79 @@
+---
+title: Reduce Hallucinations
+---
+
+We recommend these simple ways to reduce hallucinations:
+
+### 1. Set temperature to 0.0 (especially if extracting data verbatim)
+This will make the model less creative and more likely to just extract the data that you want verbatim.
+```baml clients.baml
+client MyClient {
+  provider openai
+  options {
+    model "gpt-4o"
+    temperature 0.0
+  }
+}
+```
+
+### 2. Reduce the number of input tokens
+Reduce the amount of data you're giving the model to process to reduce confusion.
+
+Prune as much data as possible, or split your prompt into multiple prompts analyzing subsets of the data.
+
+If you're processing `images`, try cropping out the parts of the image that you don't need. LLMs can only handle images of certain sizes, so every pixel counts. Make sure you resize images to the model's input size (even if the provider does the resizing for you), so you can gauge how clear the image is at the model's resolution. You'll notice that the blurrier the image is, the higher the hallucination rate.
+
+Let us know if you want more tips for processing images; we have some helper prompts we can share, and we can help debug your prompt.
+
+### 3. Use reasoning or reflection prompting
+Read our [chain-of-thought guide](/01-guide/06-prompt-engineering/chain-of-thought) for more.
+
+### 4. Watch out for contradictions and word associations
+
+Each word you add to the prompt causes the model to associate it with something it saw in its training data. This is why we have techniques like [symbol tuning](/01-guide/06-prompt-engineering/symbol-tuning) to help control this bias.
+
+Let's say you have a prompt that says:
+```
+Answer in this JSON schema:
+
+<schema>
+
+But when you answer, add some comments in the JSON indicating your reasoning for the field like this:
+
+Example:
+---
+{
+  // I used the name "John" because it's the name of the person who wrote the prompt
+  "name": "John"
+}
+
+JSON:
+```
+
+The LLM may not write the `// comment` inline, because it's been trained to associate JSON with actual "valid" JSON.
+
+You can get around this with some more coaxing like:
+```text {12,13}
+Answer in this JSON schema:
+
+<schema>
+
+But when you answer, add some comments in the JSON indicating your reasoning for the field like this:
+---
+{
+  // I used the name "John" because it's the name of the person who wrote the prompt
+  "name": "John"
+}
+
+It's ok if this isn't fully valid JSON,
+we will fix it afterwards and remove the comments.
+
+JSON:
+```
+
+The LLM made an assumption that you want "JSON" -- which doesn't use comments -- and our instructions were not explicit enough to override that bias originally.
+
+Keep on reading for more tips and tricks! Or reach out in our Discord.
diff --git a/fern/01-guide/06-prompt-engineering/rag.mdx b/fern/01-guide/06-prompt-engineering/rag.mdx
new file mode 100644
index 000000000..e69de29bb
diff --git a/fern/01-guide/06-prompt-engineering/symbol-tuning.mdx b/fern/01-guide/06-prompt-engineering/symbol-tuning.mdx
new file mode 100644
index 000000000..c20969d18
--- /dev/null
+++ b/fern/01-guide/06-prompt-engineering/symbol-tuning.mdx
@@ -0,0 +1,48 @@
+---
+title: Creating a Classification Function with Symbol Tuning
+---
+
+Aliasing field names to abstract symbols like "k1", "k2", etc. can improve classification results. This technique, known as symbol tuning, helps the LLM focus on your descriptions rather than being biased by the enum or property names themselves.
+
+See the paper [Symbol Tuning Improves In-Context Learning in Language Models](https://arxiv.org/abs/2305.08298) for more details.
+
+```baml
+enum MyClass {
+  Refund @alias("k1")
+  @description("Customer wants to refund a product")
+
+  CancelOrder @alias("k2")
+  @description("Customer wants to cancel an order")
+
+  TechnicalSupport @alias("k3")
+  @description("Customer needs help with a technical issue unrelated to account creation or login")
+
+  AccountIssue @alias("k4")
+  @description("Specifically relates to account-login or account-creation")
+
+  Question @alias("k5")
+  @description("Customer has a question")
+}
+
+function ClassifyMessageWithSymbol(input: string) -> MyClass {
+  client GPT4o
+
+  prompt #"
+    Classify the following INPUT into ONE
+    of the following categories:
+
+    INPUT: {{ input }}
+
+    {{ ctx.output_format }}
+
+    Response:
+  "#
+}
+
+test Test1 {
+  functions [ClassifyMessageWithSymbol]
+  args {
+    input "I can't access my account using my login credentials. I haven't received the promised reset password email. Please help."
+  }
+}
+```
\ No newline at end of file
diff --git a/fern/01-guide/06-prompt-engineering/tools.mdx b/fern/01-guide/06-prompt-engineering/tools.mdx
new file mode 100644
index 000000000..a736ffc57
--- /dev/null
+++ b/fern/01-guide/06-prompt-engineering/tools.mdx
@@ -0,0 +1,258 @@
+---
+title: Tools / Function Calling
+---
+
+"Function calling" is a technique for getting an LLM to choose a function to call for you.
+
+The way it works is:
+1. You define a task with certain function(s)
+2. Ask the LLM to **choose which function to call**
+3. **Get the function parameters from the LLM** for the appropriate function it chose
+4.
**Call the functions** in your code with those parameters
+
+In BAML, you can represent a `tool` or a `function` you want to call as a BAML `class`, and make the function's output type that class.
+
+```baml BAML
+class WeatherAPI {
+  city string @description("the user's city")
+  timeOfDay string @description("As an ISO8601 timestamp")
+}
+
+function UseTool(user_message: string) -> WeatherAPI {
+  client GPT4Turbo
+  prompt #"
+    Extract the info from this message
+    ---
+    {{ user_message }}
+    ---
+
+    {# special macro to print the output schema. #}
+    {{ ctx.output_format }}
+
+    JSON:
+  "#
+}
+```
+Call the function like this:
+
+```python Python
+from baml_client import b
+from baml_client.types import WeatherAPI
+
+def main():
+    weather_info = b.UseTool("What's the weather like in San Francisco?")
+    print(weather_info)
+    assert isinstance(weather_info, WeatherAPI)
+    print(f"City: {weather_info.city}")
+    print(f"Time of Day: {weather_info.timeOfDay}")
+
+if __name__ == '__main__':
+    main()
+```
+
+```typescript TypeScript
+import { b } from './baml_client'
+import { WeatherAPI } from './baml_client/types'
+import assert from 'assert'
+
+const main = async () => {
+  const weatherInfo = await b.UseTool("What's the weather like in San Francisco?")
+  console.log(weatherInfo)
+  assert(weatherInfo instanceof WeatherAPI)
+  console.log(`City: ${weatherInfo.city}`)
+  console.log(`Time of Day: ${weatherInfo.timeOfDay}`)
+}
+
+main()
+```
+
+```ruby Ruby
+require_relative "baml_client/client"
+
+$b = Baml.Client
+
+def main
+  weather_info = $b.UseTool(user_message: "What's the weather like in San Francisco?")
+  puts weather_info
+  raise unless weather_info.is_a?(Baml::Types::WeatherAPI)
+  puts "City: #{weather_info.city}"
+  puts "Time of Day: #{weather_info.timeOfDay}"
+end
+
+main
+```
+
+## Choosing multiple Tools
+
+To choose ONE tool out of many, you can use a union:
+```baml BAML
+function UseTool(user_message: string) -> WeatherAPI | MyOtherAPI {
+  .... // same thing
+}
+```
+
+If you use the [VSCode Playground](/guides/installation-editors/vs-code-extension), you can see what we inject into the prompt, with full transparency.
+ +Call the function like this: + + +```python Python +import asyncio +from baml_client import b +from baml_client.types import WeatherAPI, MyOtherAPI + +async def main(): + tool = b.UseTool("What's the weather like in San Francisco?") + print(tool) + + if isinstance(tool, WeatherAPI): + print(f"Weather API called:") + print(f"City: {tool.city}") + print(f"Time of Day: {tool.timeOfDay}") + elif isinstance(tool, MyOtherAPI): + print(f"MyOtherAPI called:") + # Handle MyOtherAPI specific attributes here + +if __name__ == '__main__': + main() +``` + +```typescript TypeScript +import { b } from './baml_client' +import { WeatherAPI, MyOtherAPI } from './baml_client/types' + +const main = async () => { + const tool = await b.UseTool("What's the weather like in San Francisco?") + console.log(tool) + + if (tool instanceof WeatherAPI) { + console.log("Weather API called:") + console.log(`City: ${tool.city}`) + console.log(`Time of Day: ${tool.timeOfDay}`) + } else if (tool instanceof MyOtherAPI) { + console.log("MyOtherAPI called:") + // Handle MyOtherAPI specific attributes here + } +} + +main() +``` + +```ruby Ruby +require_relative "baml_client/client" + +$b = Baml.Client + +def main + tool = $b.UseTool(user_message: "What's the weather like in San Francisco?") + puts tool + + case tool + when Baml::Types::WeatherAPI + puts "Weather API called:" + puts "City: #{tool.city}" + puts "Time of Day: #{tool.timeOfDay}" + when Baml::Types::MyOtherAPI + puts "MyOtherAPI called:" + # Handle MyOtherAPI specific attributes here + end +end + +main +``` + + +## Choosing N Tools +To choose many tools, you can use a union of a list: +```baml BAML +function UseTool(user_message: string) -> (WeatherAPI | MyOtherAPI)[] { + .... // same thing +} +``` + +Call the function like this: + + +```python Python +import asyncio +from baml_client import b +from baml_client.types import WeatherAPI, MyOtherAPI + +async def main(): + tools = b.UseTool("What's the weather like in San Francisco and New York?") + print(tools) + + for tool in tools: + if isinstance(tool, WeatherAPI): + print(f"Weather API called:") + print(f"City: {tool.city}") + print(f"Time of Day: {tool.timeOfDay}") + elif isinstance(tool, MyOtherAPI): + print(f"MyOtherAPI called:") + # Handle MyOtherAPI specific attributes here + +if __name__ == '__main__': + main() +``` + +```typescript TypeScript +import { b } from './baml_client' +import { WeatherAPI, MyOtherAPI } from './baml_client/types' + +const main = async () => { + const tools = await b.UseTool("What's the weather like in San Francisco and New York?") + console.log(tools) + + tools.forEach(tool => { + if (tool instanceof WeatherAPI) { + console.log("Weather API called:") + console.log(`City: ${tool.city}`) + console.log(`Time of Day: ${tool.timeOfDay}`) + } else if (tool instanceof MyOtherAPI) { + console.log("MyOtherAPI called:") + // Handle MyOtherAPI specific attributes here + } + }) +} + +main() +``` + +```ruby Ruby +require_relative "baml_client/client" + +$b = Baml.Client + +def main + tools = $b.UseTool(user_message: "What's the weather like in San Francisco and New York?") + puts tools + + tools.each do |tool| + case tool + when Baml::Types::WeatherAPI + puts "Weather API called:" + puts "City: #{tool.city}" + puts "Time of Day: #{tool.timeOfDay}" + when Baml::Types::MyOtherAPI + puts "MyOtherAPI called:" + # Handle MyOtherAPI specific attributes here + end + end +end + +main +``` + + +## Function-calling APIs vs Prompting +Injecting your function schemas into the prompt, as BAML does, 
outperforms function-calling APIs across all benchmarks for major providers ([see our Berkeley FC Benchmark results with BAML](https://www.boundaryml.com/blog/sota-function-calling?q=0)).
+
+Amongst other limitations, function-calling APIs will at times:
+1. Return a schema when you don't want any (you want an error).
+2. Not work for tools with more than 100 parameters.
+3. Use [many more tokens than prompting](https://www.boundaryml.com/blog/type-definition-prompting-baml).
+
+Keep in mind that "JSON mode" is nearly the same thing as "prompting", but it enforces that the LLM response is ONLY a JSON blob.
+BAML does not use JSON mode, so that developers can use better prompting techniques like chain-of-thought and let the LLM express its reasoning before printing out the actual schema. BAML's parser can find the JSON schema(s) in free-form text for you. Read more about different approaches to structured generation [here](https://www.boundaryml.com/blog/schema-aligned-parsing).
+
+BAML will still support native function-calling APIs in the future (please let us know more about your use case so we can prioritize accordingly).
diff --git a/fern/01-guide/07-observability/studio.mdx b/fern/01-guide/07-observability/studio.mdx
new file mode 100644
index 000000000..dfc4bd30f
--- /dev/null
+++ b/fern/01-guide/07-observability/studio.mdx
@@ -0,0 +1,142 @@
+---
+title: Boundary Studio
+---
+
+For the remainder of 2024, Boundary Studio is free for new accounts!
+
+Boundary Studio 2 will be released in 2025 with a new pricing model.
+
+To enable observability with BAML, you'll first need to sign up for a [Boundary Studio](https://app.boundaryml.com) account.
+
+Once you've signed up, you'll be able to create a new project and get your project token.
+
+Then simply add the following environment variables prior to running your application:
+
+```bash
+export BOUNDARY_PROJECT_ID=project_uuid
+export BOUNDARY_SECRET=your_token
+```
+
+In Boundary Studio you'll be able to see all the metrics and logs from your application, including:
+
+- Cost
+- Function calls
+- Execution time
+- Token Usage
+- Prompt Logs
+- and more...
+
+## Tracing Custom Events
+
+BAML allows you to trace any function with the **@trace** decorator.
+This will make the function's input and output show up in the Boundary dashboard. This works for any Python function you define yourself. BAML LLM functions (or any other function declared in a .baml file) are already traced by default. Logs are only sent to the dashboard if you set up your environment variables correctly.
+
+### Example
+
+In the example below, we trace each of the two functions `pre_process_text` and `full_analysis`:
+
+```python Python
+from baml_client import baml
+from baml_client.types import Book, AuthorInfo
+from baml_client.tracing import trace
+
+# You can also add a custom name with trace(name="my_custom_name")
+# By default, we use the function's name.
+@trace
+def pre_process_text(text):
+    return text.replace("\n", " ")
+
+
+@trace
+async def full_analysis(book: Book):
+    sentiment = await baml.ClassifySentiment(
+        pre_process_text(book.content)
+    )
+    book_analysis = await baml.AnalyzeBook(book)
+    return book_analysis
+
+
+@trace
+async def test_book1():
+    content = """Before I could reply that he [Gatsby] was my neighbor...
+    """
+    processed_content = pre_process_text(content)
+    return await full_analysis(
+        Book(
+            title="The Great Gatsby",
+            author=AuthorInfo(firstName="F.
Scott", lastName="Fitzgerald"), + content=processed_content, + ), + ) +``` + +```typescript TypeScript +import { baml } from 'baml_client'; +import { Book, AuthorInfo } from 'baml_client/types'; +import { traceSync, traceAsync } from 'baml_client/tracing'; + +const preProcessText = traceSync(function(text: string): Promise { + return text.replace(/\n/g, " "); +}); + +const fullAnalysis = traceAsync(async function(book: Book): Promise { + const sentiment = await baml.ClassifySentiment( + preProcessText(book.content) + ); + const bookAnalysis = await baml.AnalyzeBook(book); + return bookAnalysis; +}); + +const testBook1 = traceAsync(async function(): Promise { + const content = `Before I could reply that he [Gatsby] was my neighbor...`; + const processedContent = preProcessText(content); + return await fullAnalysis( + new Book( + "The Great Gatsby", + new AuthorInfo("F. Scott", "Fitzgerald"), + processedContent + ) + ); +}); +``` + +```text Ruby +Tracing non-baml functions is not yet supported in Ruby. +``` + + +```text REST (OpenAPI) +Tracing non-baml functions is not yet supported in REST (OpenAPI). +``` + + + +This allows us to see each function invocation, as well as all its children in the dashboard: + + + +See [running tests](/running-tests) for more information on how to run this test. + +### Adding custom tags + +The dashboard view allows you to see custom tags for each of the function calls. This is useful for adding metadata to your traces and allow you to query your generated logs more easily. + +To add a custom tag, you can import **set_tags(..)** as below: + +```python +from baml_client.tracing import set_tags, trace +import typing + +@trace +async def pre_process_text(text): + set_tags(userId="1234") + + # You can also create a dictionary and pass it in + tags_dict: typing.Dict[str, str] = {"userId": "1234"} + set_tags(**tags_dict) # "**" unpacks the dictionary + return text.replace("\n", " ") +``` diff --git a/fern/01-guide/08-integrations/nextjs.mdx b/fern/01-guide/08-integrations/nextjs.mdx new file mode 100644 index 000000000..f035aa193 --- /dev/null +++ b/fern/01-guide/08-integrations/nextjs.mdx @@ -0,0 +1,230 @@ +--- +title: Next.js Integration +slug: docs/baml-nextjs/baml-nextjs +--- + +BAML can be used with Vercel's AI SDK to stream BAML functions to your UI. + +The latest example code is found in our [NextJS starter](https://github.com/BoundaryML/baml-examples/tree/main/nextjs-starter), but this tutorial will guide you on how to add BAML step-by-step. + +See the [live demo](https://baml-examples.vercel.app/) + + +You will need to use Server Actions, from the App Router, for this tutorial. You can still stream BAML functions from Route Handlers however. + + + + +### Install BAML, and Generate a BAML client for TypeScript +- Follow [the TS installation guide](/docs/get-started/quickstart/typescript) +- Install the VSCode extension and Save a baml file to generate the client (or use `npx baml-cli generate`). + + +### Create some helper utilities to stream BAML functions +Let's add some helpers to export our baml functions as streamable server actions. See the last line in this file, where we export the `extractResume` function. + +In `app/utils/streamableObject.tsx` add the following code: +```typescript +import { createStreamableValue, StreamableValue as BaseStreamableValue } from "ai/rsc"; +import { BamlStream } from "@boundaryml/baml"; +import { b } from "@/baml_client"; // You can change the path of this to wherever your baml_client is located. 
+
+// ------------------------------
+// Helper functions
+// ------------------------------
+
+/**
+ * Type alias for defining a StreamableValue based on a BamlStream.
+ * It captures either a partial or final result depending on the stream state.
+ */
+type StreamableValue<T extends BamlStream<any, any>> =
+  | { partial: T extends BamlStream<infer StreamRet, any> ? StreamRet : never }
+  | { final: T extends BamlStream<any, infer Ret> ? Ret : never };
+
+/**
+ * Helper function to manage and handle a BamlStream.
+ * It consumes the stream, updates the streamable value for each partial event,
+ * and finalizes the stream when complete.
+ *
+ * @param bamlStream - The BamlStream to be processed.
+ * @returns A promise that resolves with an object containing the BaseStreamableValue.
+ */
+export async function streamHelper<T extends BamlStream<any, any>>(
+  bamlStream: T,
+): Promise<{
+  object: BaseStreamableValue<StreamableValue<T>>;
+}> {
+  const stream = createStreamableValue<StreamableValue<T>>();
+
+  // Asynchronous function to process the BamlStream events
+  (async () => {
+    try {
+      // Iterate through the stream and update the stream value with partial data
+      for await (const event of bamlStream) {
+        stream.update({ partial: event });
+      }
+
+      // Obtain the final response once all events are processed
+      const response = await bamlStream.getFinalResponse();
+      stream.done({ final: response });
+    } catch (err) {
+      // Handle any errors during stream processing
+      stream.error(err);
+    }
+  })();
+
+  return { object: stream.value };
+}
+
+/**
+ * Utility function to create a streamable function from a BamlStream-producing function.
+ * This function returns an asynchronous function that manages the streaming process.
+ *
+ * @param func - A function that produces a BamlStream when called.
+ * @returns An asynchronous function that returns a BaseStreamableValue for the stream.
+ */
+export function makeStreamable<
+  BamlStreamFunc extends (...args: any) => BamlStream<any, any>,
+>(
+  func: BamlStreamFunc
+): (...args: Parameters<BamlStreamFunc>) => Promise<{
+  object: BaseStreamableValue<StreamableValue<ReturnType<BamlStreamFunc>>>;
+}> {
+  return async (...args) => {
+    const boundFunc = func.bind(b.stream);
+    const stream = boundFunc(...args);
+    return streamHelper(stream);
+  };
+}
+```
+
+### Export your BAML functions to streamable server actions
+
+In `app/actions/extract.tsx` add the following code:
+```typescript
+import { makeStreamable } from "../utils/streamableObject";
+
+export const extractResume = makeStreamable(b.stream.ExtractResume);
+```
+
+### Create a hook to use the streamable functions in React Components
+This hook will work like [react-query](https://react-query.tanstack.com/), but for BAML functions.
+It will give you partial data, the loading status, and whether the stream was completed.
+
+In `app/_hooks/useStream.ts` add:
+```typescript
+import { useState } from "react";
+import { readStreamableValue, StreamableValue } from "ai/rsc";
+
+/**
+ * A hook that streams data from a server action. The server action must return a StreamableValue.
+ * See the example action in app/actions/extract.tsx
+ */
+export function useStream<P extends any[], PartialRet, Ret>(
+  serverAction: (...args: P) => Promise<{ object: StreamableValue<{ partial: PartialRet } | { final: Ret }, any> }>
+) {
+  const [isLoading, setIsLoading] = useState(false);
+  const [isComplete, setIsComplete] = useState(false);
+  const [isError, setIsError] = useState(false);
+  const [error, setError] = useState<Error | null>(null);
+  const [partialData, setPartialData] = useState<PartialRet | undefined>(undefined); // latest partial result
+  const [streamResult, setData] = useState<Ret | undefined>(undefined); // full non-partial data
+
+  const mutate = async (
+    ...params: P
+  ): Promise<Ret | undefined> => {
+    console.log("mutate", params);
+    setIsLoading(true);
+    setIsError(false);
+    setError(null);
+
+    try {
+      const { object } = await serverAction(...params);
+      const asyncIterable = readStreamableValue(object);
+
+      for await (const value of asyncIterable) {
+        if (value !== undefined) {
+          // You could also invoke a callback here, e.g.:
+          // if (options?.onData) {
+          //   options.onData(value);
+          // }
+          console.log("value", value);
+          if ("partial" in value) {
+            setPartialData(value.partial); // Update state with the latest partial value
+          } else if ("final" in value) {
+            setData(value.final); // Update state with the final value
+            setIsComplete(true);
+            return value.final;
+          }
+        }
+      }
+    } catch (err) {
+      console.log("error", err);
+
+      setIsError(true);
+      setError(new Error(JSON.stringify(err) ?? "An error occurred"));
+      return undefined;
+    } finally {
+      setIsLoading(false);
+    }
+  };
+
+  // If you use the "data" property, your component will re-render when the data gets updated.
+  return { data: streamResult, partialData, isLoading, isComplete, isError, error, mutate };
+}
+```
+
+### Stream your BAML function in a component
+In `app/page.tsx` you can use the hook to stream the BAML function and render the result in real-time.
+
+```tsx
+"use client";
+import { extractResume } from "./actions/extract";
+import { useStream } from "./_hooks/useStream";
+// import types from baml files like this:
+import { Resume } from "@/baml_client";
+
+export default function Home() {
+  // you can also rename these fields by using ":", like how we renamed partialData to "partialResume"
+  // `mutate` is a function that will start the stream. It takes in the same arguments as the BAML function.
+  const { data: completedData, partialData: partialResume, isLoading, isError, error, mutate } = useStream(extractResume);
+
+  return (
+

BoundaryML Next.js Example

+ + + {isLoading &&

Loading...

} + {isError &&

Error: {error?.message}

} + {partialData &&
{JSON.stringify(partialData, null, 2)}
} + {data &&
{JSON.stringify(data, null, 2)}
} +
+ ); +} +``` + +
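+
+A note on the design: while the stream is in flight, `partialResume` is only partially filled in, so fields may still be `undefined` mid-stream and should be rendered defensively rather than assuming the full shape. A hypothetical sketch (assuming your `Resume` class has `name` and `skills` fields -- adjust to your own schema):
+
+```tsx
+// Hypothetical: every field of a partial result may still be undefined mid-stream.
+{partialResume?.name && <h2>{partialResume.name}</h2>}
+<ul>
+  {partialResume?.skills?.map((skill) => (
+    <li key={skill}>{skill}</li>
+  ))}
+</ul>
+```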
+
+---
+
+And now you're all set!
+
+If you have issues with your environment variables not loading, you may want to use [dotenv-cli](https://www.npmjs.com/package/dotenv-cli) to load your env vars before the Next.js process starts:
+
+`dotenv -- npm run dev`
\ No newline at end of file
diff --git a/fern/01-guide/09-comparisons/langchain.mdx b/fern/01-guide/09-comparisons/langchain.mdx
new file mode 100644
index 000000000..6d14055ba
--- /dev/null
+++ b/fern/01-guide/09-comparisons/langchain.mdx
@@ -0,0 +1,10 @@
+---
+title: Comparing Langchain
+slug: docs/comparisons/langchain
+---
+
+
+[Langchain](https://langchain.com) is a toolkit that helps developers build AI applications.
+
+### The LCEL
+
diff --git a/fern/01-guide/09-comparisons/marvin.mdx b/fern/01-guide/09-comparisons/marvin.mdx
new file mode 100644
index 000000000..de323c988
--- /dev/null
+++ b/fern/01-guide/09-comparisons/marvin.mdx
@@ -0,0 +1,126 @@
+---
+title: Comparing Marvin
+---
+
+
+[Marvin](https://github.com/PrefectHQ/marvin) lets developers do extraction or classification tasks in Python as shown below (TypeScript is not supported):
+
+
+```python
+import marvin
+import pydantic
+
+class Location(pydantic.BaseModel):
+    city: str
+    state: str
+
+marvin.extract("I moved from NY to CHI", target=Location)
+```
+
+You can also provide instructions:
+```python
+marvin.extract(
+    "I paid $10 for 3 tacos and got a dollar and 25 cents back.",
+    target=float,
+    instructions="Only extract money"
+)
+
+# [10.0, 1.25]
+```
+Or use enums to classify:
+```python
+from enum import Enum
+import marvin
+
+class RequestType(Enum):
+    SUPPORT = "support request"
+    ACCOUNT = "account issue"
+    INQUIRY = "general inquiry"
+
+request = marvin.classify("Reset my password", RequestType)
+assert request == RequestType.ACCOUNT
+```
+
+
+For enum classification, you can add more instructions to each enum, but then you don't get fully typed outputs, nor can you reuse the enum in your own code. You're back to working with raw strings.
+
+```python
+# Classifying a task based on project specifications
+project_specs = {
+    "Frontend": "Tasks involving UI design, CSS, and JavaScript.",
+    "Backend": "Tasks related to server, database, and application logic.",
+    "DevOps": "Tasks involving deployment, CI/CD, and server maintenance."
+}
+
+task_description = "Set up the server for the new application."
+
+task_category = marvin.classify(
+    task_description,
+    labels=list(project_specs.keys()),
+    instructions="Match the task to the project category based on the provided specifications."
+)
+assert task_category == "Backend"
+```
+
+Marvin also has some inherent limitations, for example:
+1. How do you use a different model?
+2. What is the full prompt? Where does it live? What if I want to change it because it doesn't work well for my use-case? How many tokens is it?
+3. How do I test this function?
+4. How do I visualize results over time in production?
+
+
+### Using BAML
+Here is the BAML equivalent of this classification task, based on the prompt Marvin uses under the hood. Note how the prompt becomes transparent to you using BAML. You can easily make it more complex or simpler depending on the model.
+ +```baml +enum RequestType { + SUPPORT @alias("support request") + ACCOUNT @alias("account issue") @description("A detailed description") + INQUIRY @alias("general inquiry") +} + +function ClassifyRequest(input: string) -> RequestType { + client GPT4 // choose even open source models + prompt #" + You are an expert classifier that always maintains as much semantic meaning + as possible when labeling text. Classify the provided data, + text, or information as one of the provided labels: + + TEXT: + --- + {{ input }} + --- + + {{ ctx.output_format }} + + The best label for the text is: + "# +} +``` +And you can call this function in your code +```python +from baml_client import baml as b + +... +requestType = await b.ClassifyRequest("Reset my password") +# fully typed output +assert requestType == RequestType.ACCOUNT +``` + +The prompt string may be more wordy, but with BAML you now have +1. Fully typed responses, guaranteed +1. Full transparency and flexibility of the prompt string +1. Full freedom for what model to use +1. Helper functions to manipulate types in prompts (print_enum) +1. Testing capabilities using the VSCode playground +1. Analytics in the Boundary Dashboard +1. Support for TypeScript +1. A better understanding of how prompt engineering works + + +Marvin was a big source of inspiration for us -- their approach is simple and elegant. We recommend checking out Marvin if you're just starting out with prompt engineering or want to do a one-off simple task in Python. But if you'd like a whole added set of features, we'd love for you to give BAML a try and let us know what you think. + +### Limitations of BAML + +BAML does have some limitations we are continuously working on. Here are a few of them: +1. It is a new language. However, it is fully open source and getting started takes less than 10 minutes. We are on-call 24/7 to help with any issues (and even provide prompt engineering tips) +1. Developing requires VSCode. You _could_ use vim and we have workarounds but we don't recommend it. diff --git a/fern/01-guide/09-comparisons/pydantic.mdx b/fern/01-guide/09-comparisons/pydantic.mdx new file mode 100644 index 000000000..1830ac004 --- /dev/null +++ b/fern/01-guide/09-comparisons/pydantic.mdx @@ -0,0 +1,409 @@ +--- +title: Comparing Pydantic +--- + +Pydantic is a popular library for data validation in Python used by most -- if not all -- LLM frameworks, like [instructor](https://github.com/jxnl/instructor/tree/main). + +BAML also uses Pydantic. The BAML Rust compiler can generate Pydantic models from your `.baml` files. But that's not all the compiler does -- it also takes care of fixing common LLM parsing issues, supports more data types, handles retries, and reduces the amount of boilerplate code you have to write. + +Let's dive into how Pydantic is used and its limitations. + +### Why working with LLMs requires more than just Pydantic + +Pydantic can help you get structured output from an LLM easily at first glance: +```python +class Resume(BaseModel): + name: str + skills: List[str] + +def create_prompt(input_text: str) -> str: + PROMPT_TEMPLATE = f"""Parse the following resume and return a structured representation of the data in the schema below. 
+Resume:
+---
+{input_text}
+---
+
+Schema:
+{Resume.model_json_schema()['properties']}
+
+Output JSON:
+"""
+    return PROMPT_TEMPLATE
+
+def extract_resume(input_text: str) -> Union[Resume, None]:
+    prompt = create_prompt(input_text)
+    chat_completion = client.chat.completions.create(
+        model="gpt-4", messages=[{"role": "system", "content": prompt}]
+    )
+    try:
+        output = chat_completion.choices[0].message.content
+        if output:
+            return Resume.model_validate_json(output)
+        return None
+    except Exception as e:
+        raise e
+```
+
+That's pretty good, but now we want to add an `Education` model to the `Resume` model. We add the following code:
+
+```diff
+...
++class Education(BaseModel):
++    school: str
++    degree: str
++    year: int
+
+class Resume(BaseModel):
+    name: str
+    skills: List[str]
++    education: List[Education]
+
+def create_prompt(input_text: str) -> str:
+    additional_models = ""
++    if "$defs" in Resume.model_json_schema():
++        additional_models += f"\nUse these other schema definitions as well:\n{Resume.model_json_schema()['$defs']}"
+    PROMPT_TEMPLATE = f"""Parse the following resume and return a structured representation of the data in the schema below.
+Resume:
+---
+{input_text}
+---
+
+Schema:
+{Resume.model_json_schema()['properties']}
+
++    {additional_models}
+
+Output JSON:
+""".strip()
+    return PROMPT_TEMPLATE
+...
+```
+A little ugly, but still readable... But managing all these prompt strings can make your codebase disorganized very quickly.
+
+Then you realize the LLM sometimes outputs some text before giving you the JSON, like this:
+
+```diff
++ The output is:
+{
+  "name": "John Doe",
+  ... // truncated for brevity
+}
+```
+
+So you add a regex that extracts everything between `{}` to address that:
+
+```diff
+def extract_resume(input_text: str) -> Union[Resume, None]:
+    prompt = create_prompt(input_text)
+    print(prompt)
+    chat_completion = client.chat.completions.create(
+        model="gpt-4", messages=[{"role": "system", "content": prompt}]
+    )
+    try:
+        output = chat_completion.choices[0].message.content
+        print(output)
+        if output:
++            # Extract the JSON block using a regex
++            json_match = re.search(r"\{.*?\}", output, re.DOTALL)
++            if json_match:
++                json_output = json_match.group(0)
++                return Resume.model_validate_json(json_output)
+        return None
+    except Exception as e:
+        raise e
+```
+
+Next you realize you actually want an array of `Resumes`, but you can't really use `List[Resume]` because Pydantic and Python don't work this way, so you have to add another wrapper:
+
+```diff
++class ResumeArray(BaseModel):
++    resumes: List[Resume]
+```
+Now you need to change the rest of your code to handle different models. That's good long-term, but it is now more boilerplate you have to write, test, and maintain.
+
+Next, you notice the LLM sometimes outputs a single resume `{...}`, and sometimes an array `[{...}]`...
+You must now change your parser to handle both cases:
+
+```diff
++def extract_resume(input_text: str) -> Union[List[Resume], None]:
++    prompt = create_prompt(input_text)  # Also requires changes
+    chat_completion = client.chat.completions.create(
+        model="gpt-4", messages=[{"role": "system", "content": prompt}]
+    )
+    try:
+        output = chat_completion.choices[0].message.content
+        if output:
+            # Extract the JSON block using a regex
+            json_match = re.search(r"\{.*?\}", output, re.DOTALL)
+            if json_match:
+                json_output = json_match.group(0)
++                parsed = json.loads(json_output)
++                if isinstance(parsed, list):
++                    return [Resume.model_validate(item) for item in parsed]
++                else:
++                    return [Resume.model_validate(parsed)]
+        return None
+    except Exception as e:
+        raise e
+```
+You could retry the call against the LLM to fix the issue, but that will cost you precious seconds and tokens, so handling this corner case manually is the only solution.
+
+
+
+
+
+---
+## A small tangent -- JSON schemas vs type definitions
+Sidenote: At this point your prompt looks like this:
+
+```
+JSON Schema:
+{'name': {'title': 'Name', 'type': 'string'}, 'skills': {'items': {'type': 'string'}, 'title': 'Skills', 'type': 'array'}, 'education': {'anyOf': [{'$ref': '#/$defs/Education'}, {'type': 'null'}]}}
+
+
+Use these other JSON schema definitions as well:
+{'Education': {'properties': {'degree': {'title': 'Degree', 'type': 'string'}, 'major': {'title': 'Major', 'type': 'string'}, 'school': {'title': 'School', 'type': 'string'}, 'year': {'title': 'Year', 'type': 'integer'}}, 'required': ['degree', 'major', 'school', 'year'], 'title': 'Education', 'type': 'object'}}
+```
+
+and sometimes even GPT-4 outputs incorrect stuff like the following, even though it's technically valid JSON -- so OpenAI's "JSON mode" will not save you here:
+```
+{
+    "name":
+    {
+        "title": "Name",
+        "type": "string",
+        "value": "John Doe"
+    },
+    "skills":
+    {
+        "items":
+        {
+            "type": "string",
+            "values":
+            [
+                "Python",
+                "JavaScript",
+                "React"
+            ]
+            ... // truncated for brevity
+```
+(this is an actual result from GPT-4 before some more prompt engineering)
+
+when all you really want is a prompt that looks like the one below -- with far fewer tokens (and less likelihood of confusion):
+```diff
+Parse the following resume and return a structured representation of the data in the schema below.
+Resume:
+---
+John Doe
+Python, Rust
+University of California, Berkeley, B.S. in Computer Science, 2020
+---
+
++JSON Schema:
++{
++  "name": string,
++  "skills": string[],
++  "education": {
++    "school": string,
++    "degree": string,
++    "year": integer
++  }[]
++}
+
+Output JSON:
+```
+Ahh, much better. **That's 80% fewer tokens** with a simpler prompt, for the same results. (See also Microsoft's [TypeChat](https://microsoft.github.io/TypeChat/docs/introduction/), which uses a similar schema format based on TypeScript types.)
+
+---
+But we digress, let's get back to the point. You can see how this can get out of hand quickly, and how Pydantic wasn't really made with LLMs in mind. We haven't even gotten around to adding resilience like **retries, or falling back to a different model in the event of an outage**. There's still a lot of wrapper code to write.
+
+### Pydantic and Enums
+There are other core limitations.
+Say you want to do a classification task using Pydantic. An Enum is a great fit for modelling this.
+
+Assume this is our prompt:
+```text
+Classify the company described in this text into the best
+of the following categories:
+
+Text:
+---
+{some_text}
+---
+
+Categories:
+- Technology: Companies involved in the development and production of technology products or services
+- Healthcare: Includes companies in pharmaceuticals, biotechnology, medical devices.
+- Real estate: Includes real estate investment trusts (REITs) and companies involved in real estate development.
+
+The best category is:
+```
+
+Since we have descriptions, we need to generate a custom enum we can use to build the prompt:
+
+```python
+class FinancialCategory(Enum):
+    technology = (
+        "Technology",
+        "Companies involved in the development and production of technology products or services.",
+    )
+    ...
+    real_estate = (
+        "Real Estate",
+        "Includes real estate investment trusts (REITs) and companies involved in real estate development.",
+    )
+
+    def __init__(self, category, description):
+        self._category = category
+        self._description = description
+
+    @property
+    def category(self):
+        return self._category
+
+    @property
+    def description(self):
+        return self._description
+
+```
+We add a class method to load the right enum from the LLM output string:
+```python
+    @classmethod
+    def from_string(cls, category: str) -> "FinancialCategory":
+        for c in cls:
+            if c.category == category:
+                return c
+        raise ValueError(f"Invalid category: {category}")
+```
+Update the prompt to use the enum descriptions (note the helper must return a string rather than print it, or the f-string would interpolate `None`):
+```python
+def categories_and_descriptions() -> str:
+    return "\n".join(
+        f"- {c.category}: {c.description}" for c in FinancialCategory
+    )
+
+def create_prompt(text: str) -> str:
+    PROMPT_TEMPLATE = f"""Classify the company described in this text into the best
+of the following categories:
+
+Text:
+---
+{text}
+---
+
+Categories:
+{categories_and_descriptions()}
+
+The best category is:
+"""
+    return PROMPT_TEMPLATE
+```
+And then we use it in our AI function:
+```python
+def classify_company(text: str) -> Union[FinancialCategory, None]:
+    prompt = create_prompt(text)
+    chat_completion = client.chat.completions.create(
+        model="gpt-4", messages=[{"role": "system", "content": prompt}]
+    )
+    try:
+        output = chat_completion.choices[0].message.content
+        if output:
+            # Use our helper function!
+            return FinancialCategory.from_string(output)
+        return None
+    except Exception as e:
+        raise e
+```
+
+What gets hairy is if you want to change your types.
+- What if you want the LLM to return an object instead? You have to change your enum, your prompt, AND your parser.
+- What if you want to handle cases where the LLM outputs "Real Estate" or "real estate"?
+- What if you want to save the enum information in a database? `str(category)` will save `FinancialCategory.healthcare` into your DB, but your parser only recognizes "Healthcare", so you'll need more boilerplate if you ever want to programmatically analyze your data.
+
+
+### Alternatives
+There are libraries like [instructor](https://github.com/jxnl/instructor/tree/main) that remove a good amount of this boilerplate, but you're still:
+
+1. Using prompts that you cannot control. E.g. [a commit may change your results underneath you](https://github.com/jxnl/instructor/commit/1b6d8253c0f7dfdaa6cb1dbdbd37684d192ddecf).
+1. Using more tokens than you need to declare schemas (higher costs and latencies).
+1. **There are no included testing capabilities.**
Developers have to copy-paste JSON blobs everywhere, potentially between their IDEs and other websites. Existing LLM Playgrounds were not made with structured data in mind. +1. Lack of observability. No automatic tracing of requests. + +## Enter BAML +The Boundary toolkit helps you iterate seamlessly compared to Pydantic. + +Here's all the BAML code you need to solve the Extract Resume problem from earlier (VSCode prompt preview is shown on the right): + + + + +Here we use a "GPT4" client, but you can use any model. See [client docs](/docs/syntax/client/client) + +{/* +```baml + + +class Education { + school string + degree string + year int +} + +class Resume { + name string + skills string[] + education Education[] +} + +function ExtractResume(resume_text: string) -> Resume { + client GPT4 + prompt #" + Parse the following resume and return a structured representation of the data in the schema below. + + Resume: + --- + {{ input.resume_text }} + --- + + Output in this JSON format: + {{ ctx.output_format }} + + Output JSON: + "# +} +``` */} +The BAML compiler generates a python client that imports and calls the function: +```python +from baml_client import baml as b + +async def main(): + resume = await b.ExtractResume(resume_text="""John Doe +Python, Rust +University of California, Berkeley, B.S. in Computer Science, 2020""") + + assert resume.name == "John Doe" +``` +That's it! No need to write any more code. Since the compiler knows what your function signature is we literally generate a custom deserializer for your own unique usecase that _just works_. + + + + +Converting the `Resume` into an array of resumes requires a single line change in BAML (vs having to create array wrapper classes and parsing logic). + +In this image we change the types and BAML automatically updates the prompt, parser, and the Python types you get back. + + + + + +Adding retries or resilience requires just [a couple of modifications](/docs/syntax/client/retry). And best of all, **you can test things instantly, without leaving your VSCode**. + +### Conclusion +We built BAML because writing a Python library was just not powerful enough to do everything we envisioned, as we have just explored. + +Check out the [Hello World](/docs/guides/hello_world/level0) tutorial to get started. + +Our mission is to make the best DX for AI engineers working with LLMs. Contact us at founders@boundaryml.com or [Join us on Discord](https://discord.gg/BTNBeXGuaS) to stay in touch with the community and influence the roadmap. + diff --git a/fern/01-guide/contact.mdx b/fern/01-guide/contact.mdx new file mode 100644 index 000000000..d26fe4134 --- /dev/null +++ b/fern/01-guide/contact.mdx @@ -0,0 +1,6 @@ + +We have seen many different prompts for many use-cases. We'd love to hear about your prompt and how you use BAML. + +Contact Us at [contact@boundaryml.com](mailto:contact@boundaryml.com) + +or join our [Discord](https://discord.gg/BTNBeXGuaS) \ No newline at end of file diff --git a/fern/01-guide/introduction.mdx b/fern/01-guide/introduction.mdx new file mode 100644 index 000000000..e69de29bb diff --git a/fern/01-guide/what-are-function-definitions.mdx b/fern/01-guide/what-are-function-definitions.mdx new file mode 100644 index 000000000..d2007d980 --- /dev/null +++ b/fern/01-guide/what-are-function-definitions.mdx @@ -0,0 +1,96 @@ +--- +title: What is BAML? +--- + +The best way to understand BAML and its developer experience is to see it live in a demo (see below). 
+ +### Demo video +Here we write a BAML function definition, and then call it from a Python script. + + + + +### Examples +- [Interactive NextJS app with streaming](https://baml-examples.vercel.app/examples/stream-object) +- [Starter boilerplates for Python, Typescript, Ruby, etc.](https://github.com/boundaryml/baml-examples) + +### High-level Developer Flow + + +### Write a BAML function definition +```baml main.baml +class WeatherAPI { + city string @description("the user's city") + timeOfDay string @description("As an ISO8601 timestamp") +} + +function UseTool(user_message: string) -> WeatherAPI { + client "openai/gpt-4o" + prompt #" + Extract.... {# we will explain the rest in the guides #} + "# +} +``` +Here you can run tests in the VSCode Playground. + +### Generate `baml_client` from those .baml files. +This is auto-generated code with all boilerplate to call the LLM endpoint, parse the output, fix broken JSON, and handle errors. + + + +### Call your function in any language +with type-safety, autocomplete, retry-logic, robust JSON parsing, etc.. + +```python Python +import asyncio +from baml_client import b +from baml_client.types import WeatherAPI + +def main(): + weather_info = b.UseTool("What's the weather like in San Francisco?") + print(weather_info) + assert isinstance(weather_info, WeatherAPI) + print(f"City: {weather_info.city}") + print(f"Time of Day: {weather_info.timeOfDay}") + +if __name__ == '__main__': + main() +``` + +```typescript TypeScript +import { b } from './baml_client' +import { WeatherAPI } from './baml_client/types' +import assert from 'assert' + +const main = async () => { + const weatherInfo = await b.UseTool("What's the weather like in San Francisco?") + console.log(weatherInfo) + assert(weatherInfo instanceof WeatherAPI) + console.log(`City: ${weatherInfo.city}`) + console.log(`Time of Day: ${weatherInfo.timeOfDay}`) +} +``` + +```ruby Ruby +require_relative "baml_client/client" + +$b = Baml.Client + +def main + weather_info = $b.UseTool(user_message: "What's the weather like in San Francisco?") + puts weather_info + raise unless weather_info.is_a?(Baml::Types::WeatherAPI) + puts "City: #{weather_info.city}" + puts "Time of Day: #{weather_info.timeOfDay}" +end +``` + +```python Other Languages +# read the installation guide for other languages! +``` + + + +Continue on to the [Installation Guides](/guide/installation-language) for your language to setup BAML in a few minutes! + +You don't need to migrate 100% of your LLM code to BAML in one go! It works along-side any existing LLM framework. \ No newline at end of file diff --git a/fern/01-guide/what-is-baml_client.mdx b/fern/01-guide/what-is-baml_client.mdx new file mode 100644 index 000000000..bdd9dd1cd --- /dev/null +++ b/fern/01-guide/what-is-baml_client.mdx @@ -0,0 +1,100 @@ +--- +title: What is baml_client? +--- + +**baml_client** is the code that gets generated from your BAML files that transforms your BAML prompts into the same equivalent function in your language, with validated type-safe outputs. + + +```python Python +from baml_client import b +resume_info = b.ExtractResume("....some text...") +``` + +This has all the boilerplate to: +1. call the LLM endpoint with the right parameters, +2. parse the output, +3. fix broken JSON (if any) +4. return the result in a nice typed object. +5. handle errors + +In Python, your BAML types get converted to Pydantic models. In Typescript, they get converted to TypeScript types, and so on. 
**BAML acts like a universal type system that can be used in any language**. + + + +### Generating baml_client + + Refer to the **[Installation](/guide/installation-language/python)** guides for how to set this up for your language, and how to generate it. + + But at a high-level, you just include a [generator block](/ref/baml/generator) in any of your BAML files. + + + +```baml Python +generator target { + // Valid values: "python/pydantic", "typescript", "ruby/sorbet" + output_type "python/pydantic" + + // Where the generated code will be saved (relative to baml_src/) + output_dir "../" + + // What interface you prefer to use for the generated code (sync/async) + // Both are generated regardless of the choice, just modifies what is exported + // at the top level + default_client_mode "sync" + + // Version of runtime to generate code for (should match installed baml-py version) + version "0.54.0" +} +``` + +```baml TypeScript +generator target { + // Valid values: "python/pydantic", "typescript", "ruby/sorbet" + output_type "typescript" + + // Where the generated code will be saved (relative to baml_src/) + output_dir "../" + + // What interface you prefer to use for the generated code (sync/async) + // Both are generated regardless of the choice, just modifies what is exported + // at the top level + default_client_mode "async" + + // Version of runtime to generate code for (should match the package @boundaryml/baml version) + version "0.54.0" +} +``` + +```baml Ruby (beta) +generator target { + // Valid values: "python/pydantic", "typescript", "ruby/sorbet" + output_type "ruby/sorbet" + + // Where the generated code will be saved (relative to baml_src/) + output_dir "../" + + // Version of runtime to generate code for (should match installed `baml` package version) + version "0.54.0" +} +``` + +```baml OpenAPI +generator target { + // Valid values: "python/pydantic", "typescript", "ruby/sorbet", "rest/openapi" + output_type "rest/openapi" + + // Where the generated code will be saved (relative to baml_src/) + output_dir "../" + + // Version of runtime to generate code for (should match installed `baml` package version) + version "0.54.0" + + // 'baml-cli generate' will run this after generating openapi.yaml, to generate your OpenAPI client + // This command will be run from within $output_dir + on_generate "npx @openapitools/openapi-generator-cli generate -i openapi.yaml -g OPENAPI_CLIENT_TYPE -o ." +} +``` + + +The `baml_client` transforms a BAML function into the same equivalent function in your language, + diff --git a/fern/01-guide/what-is-baml_src.mdx b/fern/01-guide/what-is-baml_src.mdx new file mode 100644 index 000000000..742fadf54 --- /dev/null +++ b/fern/01-guide/what-is-baml_src.mdx @@ -0,0 +1,16 @@ +--- +title: What is baml_src? +--- + +**baml_src** is where you keep all your BAML files, and where all the prompt-related code lives. It must be named `baml_src` for our tooling to pick it up, but it can live wherever you want. + +It helps keep your project organized, and makes it easy to separate prompt engineering from the rest of your code. + + + + +Some things to note: +1. All declarations within this directory are accessible across all files contained in the `baml_src` folder. +2. You can have multiple files, and even nest subdirectories. + +You don't need to worry about including this directory when deploying your code. 
See: [Deploying](get-started/deploying/aws) \ No newline at end of file diff --git a/fern/02-examples/interactive-examples.mdx b/fern/02-examples/interactive-examples.mdx new file mode 100644 index 000000000..f1aea02a0 --- /dev/null +++ b/fern/02-examples/interactive-examples.mdx @@ -0,0 +1,5 @@ +--- +title: Interactive Examples +--- + +Check out the [live examples](https://baml-examples.vercel.app/) that use NextJS, and the [source code on Github](https://github.com/boundaryml/baml-examples). \ No newline at end of file diff --git a/fern/03-reference/baml-cli/dev.mdx b/fern/03-reference/baml-cli/dev.mdx new file mode 100644 index 000000000..276a1eabe --- /dev/null +++ b/fern/03-reference/baml-cli/dev.mdx @@ -0,0 +1,25 @@ +The `dev` command starts a development server that watches your BAML source files for changes and automatically reloads the BAML runtime. This feature is designed to streamline the development process by providing real-time updates as you modify your BAML configurations. + + + + **Warning: Preview Feature** + + 1. You must include the `--preview` flag when running the `dev` command. + 2. Be aware that this feature is still being stabilized and may change in future releases. + + +## Usage + +``` +baml-cli dev [OPTIONS] --preview +``` + +## Details + +See the [serve](./serve) command for more information on the arguments. + +The dev command performs the exact same functionality, but it additionally: + +1. Watches the BAML source files for changes. +2. Automatically reloads the server when changes are detected. +3. Automatically runs any generators when changes are detected. \ No newline at end of file diff --git a/fern/03-reference/baml-cli/generate.mdx b/fern/03-reference/baml-cli/generate.mdx new file mode 100644 index 000000000..73da0de89 --- /dev/null +++ b/fern/03-reference/baml-cli/generate.mdx @@ -0,0 +1,53 @@ +The `generate` command is used to generate BAML clients based on your BAML source files. It processes the BAML configurations and creates the necessary client code for your specified output type. + +## Usage + +``` +baml-cli generate [OPTIONS] +``` + +## Options + +| Option | Description | Default | +|--------|-------------|---------| +| `--from ` | Path to the `baml_src` directory | `./baml_src` | +| `--no-version-check` | Generate `baml_client` without checking for version mismatch | `false` | + +## Description + +The `generate` command performs the following actions: + +1. Finds all generators in the BAML project (usualy in `generators.baml`). +2. Ensure all generators match the CLI version. +3. Generate each `baml_client` based on the generator configurations. + +## Examples + +1. Generate clients using default settings: + ``` + baml-cli generate + ``` + +2. Generate clients from a specific directory: + ``` + baml-cli generate --from /path/to/my/baml_src + ``` + +3. Generate clients without version check: + ``` + baml-cli generate --no-version-check + ``` + +## Output + +The command provides informative output about the generation process: + +- If no clients were generated, it will suggest a configuration to add to your BAML files. +- If clients were generated, it will report the number of clients generated and their locations. + + +## Notes + +- If no generator configurations are found in the BAML files, the command will generate a default client based on the CLI defaults and provide instructions on how to add a generator configuration to your BAML files. 
+- If generator configurations are found, the command will generate clients according to those configurations.
+- If one of the generators fails, the command will stop at that point and report the error.
diff --git a/fern/03-reference/baml-cli/init.mdx b/fern/03-reference/baml-cli/init.mdx
new file mode 100644
index 000000000..2215b25c9
--- /dev/null
+++ b/fern/03-reference/baml-cli/init.mdx
@@ -0,0 +1,78 @@
+
+The `init` command is used to initialize a project with BAML. It sets up the necessary directory structure and configuration files to get you started with BAML.
+
+## Usage
+
+```
+baml-cli init [OPTIONS]
+```
+
+## Options
+
+| Option | Description | Default |
+|--------|-------------|---------|
+| `--dest <path>` | Specifies where to initialize the BAML project | Current directory (`.`) |
+| `--client-type <type>` | Type of BAML client to generate | Guesses based on where the CLI was installed from (`python/pydantic` for pip, `typescript` for npm, etc.) |
+| `--openapi-client-type <type>` | The OpenAPI client generator to run, if `--client-type=rest/openapi` | None |
+
+## Description
+
+The `init` command performs the following actions:
+
+1. Creates a new BAML project structure in `${DEST}/baml_src`.
+2. Creates a `generators.baml` file in the `baml_src` directory with initial configuration.
+3. Includes some additional example files in `baml_src` to get you started.
+
+## Client Types
+
+The `--client-type` option allows you to specify the type of BAML client to generate. Available options include:
+
+- `python/pydantic`: For Python clients using Pydantic
+- `typescript`: For TypeScript clients
+- `ruby/sorbet`: For Ruby clients using Sorbet
+- `rest/openapi`: For REST clients using OpenAPI
+
+If not specified, it uses the default from the runtime CLI configuration.
+
+## OpenAPI Client Types
+
+When using `--client-type=rest/openapi`, you can specify the OpenAPI client generator using the `--openapi-client-type` option. Some examples include:
+
+- `go`
+- `java`
+- `php`
+- `ruby`
+- `rust`
+- `csharp`
+
+For a full list of supported OpenAPI client types, refer to the [OpenAPI Generator documentation](https://github.com/OpenAPITools/openapi-generator#overview).
+
+## Examples
+
+1. Initialize a BAML project in the current directory with default settings:
+   ```
+   baml-cli init
+   ```
+
+2. Initialize a BAML project in a specific directory:
+   ```
+   baml-cli init --dest /path/to/my/project
+   ```
+
+3. Initialize a BAML project for Python with Pydantic:
+   ```
+   baml-cli init --client-type python/pydantic
+   ```
+
+4. Initialize a BAML project for OpenAPI with a Go client:
+   ```
+   baml-cli init --client-type rest/openapi --openapi-client-type go
+   ```
+
+## Notes
+
+- If the destination directory already contains a `baml_src` directory, the command will fail to prevent overwriting existing projects.
+- The command attempts to infer the OpenAPI generator command based on what's available in your system PATH. It checks for `openapi-generator`, `openapi-generator-cli`, or falls back to using `npx @openapitools/openapi-generator-cli`.
+- After initialization, follow the instructions provided in the console output for language-specific setup steps.
+
+For more information on getting started with BAML, visit the [BAML documentation](https://docs.boundaryml.com/docs/get-started/quickstart).
\ No newline at end of file
diff --git a/fern/03-reference/baml-cli/serve.mdx b/fern/03-reference/baml-cli/serve.mdx
new file mode 100644
index 000000000..13c29f756
--- /dev/null
+++ b/fern/03-reference/baml-cli/serve.mdx
@@ -0,0 +1,89 @@
+The `serve` command starts a BAML-over-HTTP API server that exposes your BAML functions via HTTP endpoints. This feature allows you to interact with your BAML functions through a RESTful API interface.
+
+
+  **Warning: Preview Feature**
+
+  1. You must include the `--preview` flag when running the `serve` command.
+  2. Be aware that this feature is still being stabilized and may change in future releases.
+
+
+## Usage
+
+```
+baml-cli serve [OPTIONS] --preview
+```
+
+
+If you're actively developing, you can use the `dev` command to add hot-reload functionality:
+```
+baml-cli dev [OPTIONS] --preview
+```
+
+[See more](./dev)
+
+
+## Options
+
+| Option | Description | Default |
+|--------|-------------|---------|
+| `--from <path>` | Path to the `baml_src` directory | `./baml_src` |
+| `--port <port>` | Port to expose BAML on | `2024` |
+| `--no-version-check` | Run without checking for a version mismatch with the generated `baml_client` | `false` |
+| `--preview` | Enable the preview feature | |
+
+## Description
+
+The `serve` command performs the following actions:
+
+1. Exposes BAML functions as HTTP endpoints on the specified port.
+2. Provides authentication middleware for secure access.
+
+## Endpoints
+
+
+- `POST /call/:function_name`: Call a BAML function
+
+**Debugging**
+- `GET /docs`: Interactive API documentation (Swagger UI)
+- `GET /openapi.json`: OpenAPI specification for the BAML functions
+- `GET /_debug/ping`: Health check endpoint
+- `GET /_debug/status`: Server status and authentication check
+
+## Authentication
+
+We support the header: `x-baml-api-key`
+
+Set the `BAML_PASSWORD` environment variable to enable authentication.
+
+## Examples
+
+1. Start the server with default settings:
+   ```
+   baml-cli serve --preview
+   ```
+
+2. Start the server with a custom source directory and port:
+   ```
+   baml-cli serve --from /path/to/my/baml_src --port 3000 --preview
+   ```
+
+## Testing
+
+To test the server, you can use the following `curl` commands:
+
+1. Check if the server is running:
+   ```bash
+   curl http://localhost:2024/_debug/ping
+   ```
+
+2. Call a function:
+   ```bash
+   curl -X POST http://localhost:2024/call/MyFunctionName -d '{"arg1": "value1", "arg2": "value2"}'
+   ```
+
+   ```bash API Key
+   curl -X POST http://localhost:2024/call/MyFunctionName -H "x-baml-api-key: ${BAML_PASSWORD}" -d '{"arg1": "value1", "arg2": "value2"}'
+   ```
+
+3. Access the API documentation:
+   Open `http://localhost:2024/docs` in your web browser.
diff --git a/fern/03-reference/baml/array.mdx b/fern/03-reference/baml/array.mdx
new file mode 100644
index 000000000..1f2f29b90
--- /dev/null
+++ b/fern/03-reference/baml/array.mdx
@@ -0,0 +1,63 @@
+Arrays allow you to store and manipulate collections of data. They can be declared in a concise and readable manner, supporting both single-line and multi-line formats.
+
+## Syntax
+
+To declare an array in a BAML file, you can use the following syntax:
+
+```baml
+{
+  key1 [value1, value2, value3],
+  key2 [
+    value1,
+    value2,
+    value3
+  ],
+  key3 [
+    {
+      subkey1 "valueA",
+      subkey2 "valueB"
+    },
+    {
+      subkey1 "valueC",
+      subkey2 "valueD"
+    }
+  ]
+}
+```
+
+### Key Points:
+- **Commas**: Optional for multi-line arrays, but recommended for clarity.
+- **Nested Arrays**: Supported, allowing complex data structures.
+- **Key-Value Pairs**: Arrays can contain objects with key-value pairs.
+
+## Usage Examples
+
+### Example 1: Simple Array
+
+```baml
+function DescriptionGame(items: string[]) -> string {
+  client "openai/gpt-4o-mini"
+  prompt #"
+    What 3 words best describe all of these: {{ items }}.
+  "#
+}
+
+test FruitList {
+  functions [DescriptionGame]
+  args { items ["apple", "banana", "cherry"] }
+}
+```
+
+### Example 2: Multi-line Array
+
+```baml
+test CityDescription {
+  functions [DescriptionGame]
+  args { items [
+    "New York",
+    "Los Angeles",
+    "Chicago"
+  ]
+  }
+}
+```
diff --git a/fern/03-reference/baml/bool.mdx b/fern/03-reference/baml/bool.mdx
new file mode 100644
index 000000000..0511817a8
--- /dev/null
+++ b/fern/03-reference/baml/bool.mdx
@@ -0,0 +1,22 @@
+`true` or `false`
+
+## Usage
+
+```baml
+function CreateStory(long: bool) -> string {
+  client "openai/gpt-4o-mini"
+  prompt #"
+    Write a story that is {{ "10 paragraphs" if long else "1 paragraph" }} long.
+  "#
+}
+
+test LongStory {
+  functions [CreateStory]
+  args { long true }
+}
+
+test ShortStory {
+  functions [CreateStory]
+  args { long false }
+}
+```
diff --git a/fern/03-reference/baml/class.mdx b/fern/03-reference/baml/class.mdx
new file mode 100644
index 000000000..fafb26d94
--- /dev/null
+++ b/fern/03-reference/baml/class.mdx
@@ -0,0 +1,115 @@
+
+Classes consist of a name, a list of properties, and their [types](supported-types).
+In the context of LLMs, classes describe the type of the variables you can inject into prompts and extract out from the response.
+
+
+  Note properties have no `:`
+
+
+```baml Baml
+class Foo {
+  property1 string
+  property2 int?
+  property3 Bar[]
+  property4 MyEnum
+}
+```
+
+```python Python Equivalent
+from typing import List, Optional
+from pydantic import BaseModel
+from path.to.bar import Bar
+from path.to.my_enum import MyEnum
+
+class Foo(BaseModel):
+  property1: str
+  property2: Optional[int] = None
+  property3: List[Bar]
+  property4: MyEnum
+```
+
+```typescript Typescript Equivalent
+import z from "zod";
+import { BarZod } from "./path/to/bar";
+import { MyEnumZod } from "./path/to/my_enum";
+
+const FooZod = z.object({
+  property1: z.string(),
+  property2: z.number().int().nullable().optional(),
+  property3: z.array(BarZod),
+  property4: MyEnumZod,
+});
+
+type Foo = z.infer<typeof FooZod>;
+```
+
+
+
+## Field Attributes
+
+When prompt engineering, you can also alias values and add descriptions.
+
+`@alias`
+  Aliasing renames the field for the LLM to potentially "understand" your value better, while keeping the original name in your code, so you don't need to change your downstream code every time.
+
+  This will also be used for parsing the output of the LLM back into the original object.
+
+`@description`
+  This adds some additional context to the field in the prompt.
+
+```baml BAML
+class MyClass {
+  property1 string @alias("name") @description("The name of the object")
+  age int? @description("The age of the object")
+}
+```
+
+## Class Attributes
+
+`@@dynamic`
+  If set, will allow you to add fields to the class dynamically at runtime (in your python/ts/etc code). See [dynamic classes](/guide/baml-advanced/dynamic-runtime-types) for more information.
+
+```baml BAML
+class MyClass {
+  property1 string
+  property2 int?
+
+  @@dynamic // allows me to later add property3 float[] at runtime
+}
+```
+
+## Syntax
+
+Classes may have any number of properties.
+Property names must follow these rules:
+- Must start with a letter
+- Must contain only letters, numbers, and underscores
+- Must be unique within the class
+- Classes cannot be self-referential (you cannot have a property of the same type as the class itself)
+
+The type of a property can be any [supported type](supported-types).
+
+### Default values
+
+- Not yet supported. For optional properties, the default value is `None` in Python.
+
+### Dynamic classes
+
+See [Dynamic Types](/guide/baml-advanced/dynamic-runtime-types).
+
+## Inheritance
+
+Never supported. Like Rust, we take the stance that [composition is better than inheritance](https://www.digitalocean.com/community/tutorials/composition-vs-inheritance).
diff --git a/fern/03-reference/baml/client-llm.mdx b/fern/03-reference/baml/client-llm.mdx
new file mode 100644
index 000000000..2492e7536
--- /dev/null
+++ b/fern/03-reference/baml/client-llm.mdx
@@ -0,0 +1,54 @@
+
+Clients are used to configure how LLMs are called, like so:
+
+```rust BAML
+function MakeHaiku(topic: string) -> string {
+  client "openai/gpt-4o"
+  prompt #"
+    Write a haiku about {{ topic }}.
+  "#
+}
+```
+
+This is the `<provider>/<model>` shorthand for:
+
+```rust BAML
+client MyClient {
+  provider "openai"
+  options {
+    model "gpt-4o"
+    // api_key defaults to env.OPENAI_API_KEY
+  }
+}
+
+function MakeHaiku(topic: string) -> string {
+  client MyClient
+  prompt #"
+    Write a haiku about {{ topic }}.
+  "#
+}
+```
+
+Consult the [provider documentation](#fields) for a list of supported providers
+and models, and the default options.
+
+If you want to override options like `api_key` to use a different environment
+variable, or you want to point `base_url` to a different endpoint, you should use
+the latter form.
+
+
+If you want to specify which client to use at runtime, in your Python/TS/Ruby code,
+you can use the [client registry](/guide/baml-advanced/llm-client-registry) to do so.
+
+This can come in handy if you're trying to, say, send 10% of your requests to a
+different model.
+
+
+## Fields
+
+`retry_policy`
+  The name of the retry policy. See [Retry
+  Policy](/ref/client-strategies/retry-policy).
diff --git a/fern/03-reference/baml/clients/providers/anthropic.mdx b/fern/03-reference/baml/clients/providers/anthropic.mdx
new file mode 100644
index 000000000..266001d59
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/anthropic.mdx
@@ -0,0 +1,118 @@
+---
+title: anthropic
+---
+
+
+The `anthropic` provider supports all APIs that use the same interface for the `/v1/messages` endpoint.
+
+Example:
+```baml BAML
+client MyClient {
+  provider anthropic
+  options {
+    model "claude-3-5-sonnet-20240620"
+    temperature 0
+  }
+}
+```
+
+The options are passed through directly to the API, barring a few. Here's a shorthand of the options:
+
+## Non-forwarded options
+
+`api_key`
+  Will be passed as a bearer token. **Default: `env.ANTHROPIC_API_KEY`**
+
+  `Authorization: Bearer $api_key`
+
+`base_url`
+  The base URL for the API. **Default: `https://api.anthropic.com`**
+
+`default_role`
+  The default role for any prompts that don't specify a role. **Default: `system`**
+
+  We don't have any checks for this field, you can pass any string you wish.
+
+`headers`
+  Additional headers to send with the request.
+  Unless specified with a different value, we inject the following headers:
+  ```
+  "anthropic-version" "2023-06-01"
+  ```
+
+Example:
+```baml
+client MyClient {
+  provider anthropic
+  options {
+    api_key env.MY_ANTHROPIC_KEY
+    model "claude-3-5-sonnet-20240620"
+    headers {
+      "X-My-Header" "my-value"
+    }
+  }
+}
+```
+
+
+## Forwarded options
+
+`system`
+  BAML will auto construct this field for you from the prompt, if necessary.
+  Only the first system message will be used; all subsequent ones will be cast to the `assistant` role.
+
+`messages`
+  BAML will auto construct this field for you from the prompt.
+
+`stream`
+  BAML will auto construct this field for you based on how you call the client in your code.
+
+`model`
+  The model to use.
+
+| Model |
+| --- |
+| `claude-3-5-sonnet-20240620` |
+| `claude-3-opus-20240229` |
+| `claude-3-sonnet-20240229` |
+| `claude-3-haiku-20240307` |
+
+See Anthropic's docs for the latest list of all models. You can pass any model name you wish; we will not check if it exists.
+
+`max_tokens`
+  The maximum number of tokens to generate. **Default: `4096`**
+
+For all other options, see the [official anthropic API documentation](https://docs.anthropic.com/en/api/messages).
\ No newline at end of file
diff --git a/fern/03-reference/baml/clients/providers/aws-bedrock.mdx b/fern/03-reference/baml/clients/providers/aws-bedrock.mdx
new file mode 100644
index 000000000..d97957b11
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/aws-bedrock.mdx
@@ -0,0 +1,110 @@
+---
+title: aws-bedrock
+subtitle: AWS Bedrock provider for BAML
+---
+
+
+The `aws-bedrock` provider supports all text-output models available via the
+[`Converse` API](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html).
+
+Example:
+
+```baml BAML
+client MyClient {
+  provider aws-bedrock
+  options {
+    model "anthropic.claude-3-haiku-20240307-v1:0"
+    inference_configuration {
+      temperature 0.1
+    }
+  }
+}
+```
+
+## Authorization
+
+We use the AWS SDK under the hood, which will respect [all authentication
+mechanisms supported by the
+SDK](https://docs.rs/aws-config/latest/aws_config/index.html), including but not
+limited to:
+
+  - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as set in your environment variables
+  - loading the specified `AWS_PROFILE` from `~/.aws/config`
+  - built-in authn for services running in EC2, ECS, Lambda, etc.
+
+
+## Playground setup
+Add these three environment variables to your extension variables to use the AWS Bedrock provider in the playground.
+
+- `AWS_ACCESS_KEY_ID`
+- `AWS_SECRET_ACCESS_KEY`
+- `AWS_REGION` - like `us-east-1`
+
+
+## Non-forwarded options
+
+`default_role`
+  The default role for any prompts that don't specify a role. **Default: `system`**
+
+  We don't have any checks for this field, you can pass any string you wish.
+
+
+## Forwarded options
+
+`messages`
+  BAML will auto construct this field for you from the prompt.
+
+`model`
+  The model to use.
+
+| Model | Description |
+| --------------- | ------------------------------ |
+| `anthropic.claude-3-haiku-20240307-v1:0` | Fastest + Cheapest |
+| `anthropic.claude-3-sonnet-20240229-v1:0` | Smartest |
+| `meta.llama3-8b-instruct-v1:0` | |
+| `meta.llama3-70b-instruct-v1:0` | |
+| `mistral.mistral-7b-instruct-v0:2` | |
+| `mistral.mixtral-8x7b-instruct-v0:1` | |
+
+Run `aws bedrock list-foundation-models | jq '.modelSummaries[].modelId'` to get
+a list of available foundation models; you can also use any custom models you've
+deployed.
+
+Note that to use any of these models you'll need to [request model access].
+ +[request model access]: https://docs.aws.amazon.com/bedrock/latest/userguide/model-access.html + + + + +Additional inference configuration to send with the request; see [AWS Bedrock +documentation](https://docs.rs/aws-sdk-bedrockruntime/latest/aws_sdk_bedrockruntime/types/struct.InferenceConfiguration.html). + +Example: + +```baml BAML +client MyClient { + provider aws-bedrock + options { + inference_configuration { + max_tokens 1000 + temperature 1.0 + top_p 0.8 + stop_sequence ["_EOF"] + } + } +} +``` + + diff --git a/fern/03-reference/baml/clients/providers/azure.mdx b/fern/03-reference/baml/clients/providers/azure.mdx new file mode 100644 index 000000000..d521358e2 --- /dev/null +++ b/fern/03-reference/baml/clients/providers/azure.mdx @@ -0,0 +1,114 @@ +--- +title: azure-openai +--- + + +For `azure-openai`, we provide a client that can be used to interact with the OpenAI API hosted on Azure using the `/chat/completions` endpoint. + +Example: +```baml BAML +client MyClient { + provider azure-openai + options { + resource_name "my-resource-name" + deployment_id "my-deployment-id" + // Alternatively, you can use the base_url field + // base_url "https://my-resource-name.openai.azure.com/openai/deployments/my-deployment-id" + api_version "2024-02-01" + api_key env.AZURE_OPENAI_API_KEY + } +} +``` + + + `api_version` is required. Azure will return not found if the version is not specified. + + + +The options are passed through directly to the API, barring a few. Here's a shorthand of the options: + +## Non-forwarded options + + Will be injected via the header `API-KEY`. **Default: `env.AZURE_OPENAI_API_KEY`** + + `API-KEY: $api_key` + + + + The base URL for the API. **Default: `https://${resource_name}.openai.azure.com/openai/deployments/${deployment_id}`** + + May be used instead of `resource_name` and `deployment_id`. + + + + See the `base_url` field. + + + + See the `base_url` field. + + + + The default role for any prompts that don't specify a role. **Default: `system`** + + We don't have any checks for this field, you can pass any string you wish. + + + + Will be passed via a query parameter `api-version`. + + + + Additional headers to send with the request. + +Example: +```baml BAML +client MyClient { + provider azure-openai + options { + resource_name "my-resource-name" + deployment_id "my-deployment-id" + api_version "2024-02-01" + api_key env.AZURE_OPENAI_API_KEY + headers { + "X-My-Header" "my-value" + } + } +} +``` + + + + +## Forwarded options + + BAML will auto construct this field for you from the prompt + + + BAML will auto construct this field for you based on how you call the client in your code + + +For all other options, see the [official Azure API documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions). diff --git a/fern/03-reference/baml/clients/providers/google-ai.mdx b/fern/03-reference/baml/clients/providers/google-ai.mdx new file mode 100644 index 000000000..6c499410f --- /dev/null +++ b/fern/03-reference/baml/clients/providers/google-ai.mdx @@ -0,0 +1,95 @@ +--- +title: google-ai +--- + + +The `google-ai` provider supports the `https://generativelanguage.googleapis.com/v1beta/models/{model_id}/generateContent` and `https://generativelanguage.googleapis.com/v1beta/models/{model_id}/streamGenerateContent` endpoints. 
+ + +The use of `v1beta` rather than `v1` aligns with the endpoint conventions established in [Google's SDKs](https://github.com/google-gemini/generative-ai-python/blob/8a29017e9120f0552ee3ad6092e8545d1aa6f803/google/generativeai/client.py#L60) and offers access to both the existing `v1` models and additional models exclusive to `v1beta`. + + + +BAML will automatically pick `streamGenerateContent` if you call the streaming interface. + + +Example: +```baml BAML +client MyClient { + provider google-ai + options { + model "gemini-1.5-flash" + } +} +``` + +The options are passed through directly to the API, barring a few. Here's a shorthand of the options: +## Non-forwarded options + + Will be passed as the `x-goog-api-key` header. **Default: `env.GOOGLE_API_KEY`** + + `x-goog-api-key: $api_key` + + + + The base URL for the API. **Default: `https://generativelanguage.googleapis.com/v1beta`** + + + + The default role for any prompts that don't specify a role. **Default: `user`** + + We don't have any checks for this field, you can pass any string you wish. + + + + The model to use. **Default: `gemini-1.5-flash`** + + We don't have any checks for this field, you can pass any string you wish. + +| Model | Input(s) | Optimized for | +| --- | --- | --- | +| `gemini-1.5-pro` | Audio, images, videos, and text | Complex reasoning tasks such as code and text generation, text editing, problem solving, data extraction and generation | +| `gemini-1.5-flash` | Audio, images, videos, and text | Fast and versatile performance across a diverse variety of tasks | +| `gemini-1.0-pro` | Text | Natural language tasks, multi-turn text and code chat, and code generation | + +See the [Google Model Docs](https://ai.google.dev/gemini-api/docs/models/gemini) for the latest models. + + + + Additional headers to send with the request. + +Example: +```baml BAML +client MyClient { + provider google-ai + options { + model "gemini-1.5-flash" + headers { + "X-My-Header" "my-value" + } + } +} +``` + + + + +## Forwarded options + + BAML will auto construct this field for you from the prompt + + + +For all other options, see the [official Google Gemini API documentation](https://ai.google.dev/api/rest/v1beta/models/generateContent). diff --git a/fern/03-reference/baml/clients/providers/groq.mdx b/fern/03-reference/baml/clients/providers/groq.mdx new file mode 100644 index 000000000..5f2d6f35a --- /dev/null +++ b/fern/03-reference/baml/clients/providers/groq.mdx @@ -0,0 +1,20 @@ +--- +title: groq +--- + +[Groq](https://groq.com) supports the OpenAI client, allowing you to use the +[`openai-generic`](/docs/snippets/clients/providers/openai) provider with an +overridden `base_url`. + +See https://console.groq.com/docs/openai for more information. + +```baml BAML +client MyClient { + provider openai-generic + options { + base_url "https://api.groq.com/openai/v1" + api_key env.GROQ_API_KEY + model "llama3-70b-8192" + } +} +``` diff --git a/fern/03-reference/baml/clients/providers/huggingface.mdx b/fern/03-reference/baml/clients/providers/huggingface.mdx new file mode 100644 index 000000000..eae5fdb5c --- /dev/null +++ b/fern/03-reference/baml/clients/providers/huggingface.mdx @@ -0,0 +1,19 @@ +--- +title: huggingface +--- + +[HuggingFace](https://huggingface.co/) supports the OpenAI client, allowing you to use the +[`openai-generic`](/docs/snippets/clients/providers/openai) provider with an +overridden `base_url`. + +See https://huggingface.co/docs/inference-endpoints/index for more information on their Inference Endpoints. 
+
+```baml BAML
+client MyClient {
+  provider openai-generic
+  options {
+    base_url "https://api-inference.huggingface.co/v1"
+    api_key env.HUGGINGFACE_API_KEY
+  }
+}
+```
diff --git a/fern/03-reference/baml/clients/providers/keywordsai.mdx b/fern/03-reference/baml/clients/providers/keywordsai.mdx
new file mode 100644
index 000000000..57ff23e65
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/keywordsai.mdx
@@ -0,0 +1,6 @@
+---
+title: Keywords AI
+---
+Keywords AI is a proxy layer that allows you to route requests to hundreds of models.
+
+Follow the [Keywords AI + BAML Installation Guide](https://docs.keywordsai.co/integration/development-frameworks/baml) to get started!
\ No newline at end of file
diff --git a/fern/03-reference/baml/clients/providers/lmstudio.mdx b/fern/03-reference/baml/clients/providers/lmstudio.mdx
new file mode 100644
index 000000000..c11b07ad1
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/lmstudio.mdx
@@ -0,0 +1,20 @@
+---
+title: lmstudio
+---
+
+[LM Studio](https://lmstudio.ai/docs) supports the OpenAI client, allowing you
+to use the [`openai-generic`](/docs/snippets/clients/providers/openai) provider
+with an overridden `base_url`.
+
+
+See https://lmstudio.ai/docs/local-server#make-an-inferencing-request-using-openais-chat-completions-format for more information.
+
+```baml BAML
+client MyClient {
+  provider "openai-generic"
+  options {
+    base_url "http://localhost:1234/v1"
+    model "TheBloke/phi-2-GGUF"
+  }
+}
+```
diff --git a/fern/03-reference/baml/clients/providers/ollama.mdx b/fern/03-reference/baml/clients/providers/ollama.mdx
new file mode 100644
index 000000000..d156c7a0d
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/ollama.mdx
@@ -0,0 +1,98 @@
+---
+title: ollama
+---
+
+
+[Ollama](https://ollama.com/) supports the OpenAI client, allowing you to use the
+[`openai-generic`](/docs/snippets/clients/providers/openai) provider with an
+overridden `base_url`.
+
+
+  Note that to call Ollama, you must use its OpenAI-compatible
+  `/v1` endpoint. See [Ollama's OpenAI compatibility
+  documentation](https://ollama.com/blog/openai-compatibility).
+
+You can try out BAML with Ollama at promptfiddle.com, by running `OLLAMA_ORIGINS='*' ollama serve`. Learn more [here](https://www.boundaryml.com/blog/ollama-structured-output).
+
+```baml BAML
+client MyClient {
+  provider "openai-generic"
+  options {
+    base_url "http://localhost:11434/v1"
+    model llama3
+  }
+}
+```
+
+The options are passed through directly to the API, barring a few. Here's a shorthand of the options:
+
+## Non-forwarded options
+
+`base_url`
+  The base URL for the API. **Default: `http://localhost:11434/v1`**
+
+  Note the `/v1` at the end of the URL. See [Ollama's OpenAI compatibility](https://ollama.com/blog/openai-compatibility).
+
+`default_role`
+  The default role for any prompts that don't specify a role. **Default: `system`**
+
+  We don't have any checks for this field, you can pass any string you wish.
+
+`headers`
+  Additional headers to send with the request.
+
+Example:
+```baml BAML
+client MyClient {
+  provider ollama
+  options {
+    model "llama3"
+    headers {
+      "X-My-Header" "my-value"
+    }
+  }
+}
+```
+
+## Forwarded options
+
+`messages`
+  BAML will auto construct this field for you from the prompt.
+
+`stream`
+  BAML will auto construct this field for you based on how you call the client in your code.
+
+`model`
+  The model to use.
+ +| Model | Description | +| --- | --- | +| `llama3` | Meta Llama 3: The most capable openly available LLM to date | +| `qwen2` | Qwen2 is a new series of large language models from Alibaba group | +| `phi3` | Phi-3 is a family of lightweight 3B (Mini) and 14B (Medium) state-of-the-art open models by Microsoft | +| `aya` | Aya 23, released by Cohere, is a new family of state-of-the-art, multilingual models that support 23 languages. | +| `mistral` | The 7B model released by Mistral AI, updated to version 0.3. | +| `gemma` | Gemma is a family of lightweight, state-of-the-art open models built by Google DeepMind. Updated to version 1.1 | +| `mixtral` | A set of Mixture of Experts (MoE) model with open weights by Mistral AI in 8x7b and 8x22b parameter sizes. | + +For the most up-to-date list of models supported by Ollama, see their [Model Library](https://ollama.com/library). + +To use a specific version you would do: `"mixtral:8x22b"` + diff --git a/fern/03-reference/baml/clients/providers/openai-generic.mdx b/fern/03-reference/baml/clients/providers/openai-generic.mdx new file mode 100644 index 000000000..89929fb5a --- /dev/null +++ b/fern/03-reference/baml/clients/providers/openai-generic.mdx @@ -0,0 +1,90 @@ +--- +title: openai-generic +--- + + +The `openai-generic` provider supports all APIs that use OpenAI's request and +response formats, such as Groq, HuggingFace, Ollama, OpenRouter, and Together AI. + +Example: + +```baml BAML +client MyClient { + provider "openai-generic" + options { + base_url "https://api.provider.com" + model "" + } +} +``` + + +## Non-forwarded options + + + The base URL for the API. + + **Default: `https://api.openai.com/v1`** + + + + The default role for any prompts that don't specify a role. + + We don't do any validation of this field, so you can pass any string you wish. + + **Default: `system`** + + + + Will be used to build the `Authorization` header, like so: `Authorization: Bearer $api_key` + If `api_key` is not set, or is set to an empty string, the `Authorization` header will not be sent. + + **Default: ``** + + + + Additional headers to send with the request. + +Example: + +```baml BAML +client MyClient { + provider "openai-generic" + options { + base_url "https://api.provider.com" + model "" + headers { + "X-My-Header" "my-value" + } + } +} +``` + + + +## Forwarded options + + + BAML will auto construct this field for you from the prompt + + + BAML will auto construct this field for you based on how you call the client in your code + + + The model to use. + + For OpenAI, this might be `"gpt-4o-mini"`; for Ollama, this might be `"llama2"`. The exact + syntax will depend on your API provider's documentation: we'll just forward it to them as-is. + + + +For all other options, see the [official OpenAI API documentation](https://platform.openai.com/docs/api-reference/chat/create). diff --git a/fern/03-reference/baml/clients/providers/openai.mdx b/fern/03-reference/baml/clients/providers/openai.mdx new file mode 100644 index 000000000..b4e5532e7 --- /dev/null +++ b/fern/03-reference/baml/clients/providers/openai.mdx @@ -0,0 +1,106 @@ +--- +title: openai +--- + +The `openai` provider supports the OpenAI `/chat` endpoint, setting OpenAI-specific +default configuration options. + + + For Azure, we recommend using [`azure-openai`](azure) instead. + + For all other OpenAI-compatible API providers, such as Groq, HuggingFace, + Ollama, OpenRouter, Together AI, and others, we recommend using + [`openai-generic`](openai-generic) instead. 
+ + +Example: + +```baml BAML +client MyClient { + provider "openai" + options { + api_key env.MY_OPENAI_KEY + model "gpt-3.5-turbo" + temperature 0.1 + } +} +``` + +The options are passed through directly to the API, barring a few. Here's a shorthand of the options: + +## Non-forwarded options + + + Will be used to build the `Authorization` header, like so: `Authorization: Bearer $api_key` + + **Default: `env.OPENAI_API_KEY`** + + + + The base URL for the API. + + **Default: `https://api.openai.com/v1`** + + + + The default role for any prompts that don't specify a role. + + We don't do any validation of this field, so you can pass any string you wish. + + **Default: `system`** + + + + Additional headers to send with the request. + +Example: + +```baml BAML +client MyClient { + provider openai + options { + api_key env.MY_OPENAI_KEY + model "gpt-3.5-turbo" + headers { + "X-My-Header" "my-value" + } + } +} +``` + + + + + +## Forwarded options + + + BAML will auto construct this field for you from the prompt + + + BAML will auto construct this field for you based on how you call the client in your code + + + The model to use. + +| Model | Description | +| --------------- | ------------------------------ | +| `gpt-3.5-turbo` | Fastest | +| `gpt-4o` | Fast + text + image | +| `gpt-4-turbo` | Smartest + text + image + code | +| `gpt-4o-mini` | Cheapest + text + image | + +See openai docs for the list of openai models. You can pass any model name you wish, we will not check if it exists. + + + +For all other options, see the [official OpenAI API documentation](https://platform.openai.com/docs/api-reference/chat/create). diff --git a/fern/03-reference/baml/clients/providers/openrouter.mdx b/fern/03-reference/baml/clients/providers/openrouter.mdx new file mode 100644 index 000000000..06f9745dd --- /dev/null +++ b/fern/03-reference/baml/clients/providers/openrouter.mdx @@ -0,0 +1,24 @@ +--- +title: openrouter +--- + +[OpenRouter](https://openrouter.ai) supports the OpenAI client, allowing you to use the +[`openai-generic`](/docs/snippets/clients/providers/openai) provider with an +overridden `base_url`. + + + +```baml BAML +client MyClient { + provider "openai-generic" + options { + base_url "https://openrouter.ai/api/v1" + api_key env.OPENROUTER_API_KEY + model "openai/gpt-3.5-turbo" + headers { + "HTTP-Referer" "YOUR-SITE-URL" // Optional + "X-Title" "YOUR-TITLE" // Optional + } + } +} +``` \ No newline at end of file diff --git a/fern/03-reference/baml/clients/providers/together.mdx b/fern/03-reference/baml/clients/providers/together.mdx new file mode 100644 index 000000000..ffe006c5f --- /dev/null +++ b/fern/03-reference/baml/clients/providers/together.mdx @@ -0,0 +1,20 @@ +--- +title: Together AI +--- + +[Together AI](https://www.together.ai/) supports the OpenAI client, allowing you +to use the [`openai-generic`](/docs/snippets/clients/providers/openai) provider +with an overridden `base_url`. + +See https://docs.together.ai/docs/openai-api-compatibility for more information. 
+
+```baml BAML
+client MyClient {
+  provider "openai-generic"
+  options {
+    base_url "https://api.together.ai/v1"
+    api_key env.TOGETHER_API_KEY
+    model "meta-llama/Llama-3-70b-chat-hf"
+  }
+}
+```
diff --git a/fern/03-reference/baml/clients/providers/unify.mdx b/fern/03-reference/baml/clients/providers/unify.mdx
new file mode 100644
index 000000000..9140c321f
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/unify.mdx
@@ -0,0 +1,21 @@
+---
+title: Unify AI
+---
+
+[Unify AI](https://www.unify.ai/) supports the OpenAI client, allowing you
+to use the [`openai-generic`](/docs/snippets/clients/providers/openai) provider
+with an overridden `base_url`.
+
+See https://docs.unify.ai/universal_api/making_queries#openai-python-package for more information.
+
+```baml BAML
+client UnifyClient {
+  provider "openai-generic"
+  options {
+    base_url "https://api.unify.ai/v0"
+    api_key env.MY_UNIFY_API_KEY
+    model "llama-3.1-405b-chat@together-ai"
+  }
+}
+```
+
diff --git a/fern/03-reference/baml/clients/providers/vertex.mdx b/fern/03-reference/baml/clients/providers/vertex.mdx
new file mode 100644
index 000000000..ad9db52ce
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/vertex.mdx
@@ -0,0 +1,278 @@
+---
+title: vertex-ai
+---
+
+The `vertex-ai` provider is used to interact with the Google Vertex AI services, specifically the following endpoints:
+
+```
+https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent
+https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:streamGenerateContent
+```
+
+
+Example:
+```baml BAML
+client MyClient {
+  provider vertex-ai
+  options {
+    model gemini-1.5-pro
+    project_id my-project-id
+    location us-central1
+  }
+}
+```
+## Authorization
+The `vertex-ai` provider uses the Google Cloud SDK to authenticate with a temporary access token. We generate these Google Cloud Authentication Tokens using Google Cloud service account credentials. We do not store this token, and it is only used for the duration of the request.
+
+### Instructions for downloading Google Cloud credentials
+1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
+2. Click on the project you want to use.
+3. Select the `IAM & Admin` section, and click on `Service Accounts`.
+4. Select an existing service account or create a new one.
+5. Click on the service account and select `Add Key`.
+6. Choose the JSON key type and click `Create`.
+7. Set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of the downloaded key file.
+
+
+See the [Google Cloud Application Default Credentials Docs](https://cloud.google.com/docs/authentication/application-default-credentials) for more information.
+
+The `project_id` of your client object must match the `project_id` of your credentials file.
+
+
+The options are passed through directly to the API, barring a few. Here's a shorthand of the options:
+## Non-forwarded options
+  The base URL for the API.
+
+  **Default: `https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/`**
+
+  Can be used in lieu of the **`project_id`** and **`location`** fields, to manually set the request URL.
+
+
+  Vertex requires a Google Cloud project ID for each request.
+  See the [Google Cloud Project ID Docs](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects) for more information.
+
+
+  Vertex requires a location for each request. Some locations may have different models available.
+
+  Common locations include:
+  - `us-central1`
+  - `us-west1`
+  - `us-east1`
+  - `us-south1`
+
+  See the [Vertex Location Docs](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/locations#united-states) for all locations and supported models.
+
+
+  Path to a JSON credentials file or a JSON object containing the credentials.
+
+  **Default: `env.GOOGLE_APPLICATION_CREDENTIALS`**
+
+  In this case, the path is resolved relative to the CWD of your process.
+
+  ```baml BAML
+  client Vertex {
+    provider vertex-ai
+    options {
+      model gemini-1.5-pro
+      project_id jane-doe-test-1
+      location us-central1
+      credentials 'path/to/credentials.json'
+    }
+  }
+  ```
+
+  ```baml BAML
+  client Vertex {
+    provider vertex-ai
+    options {
+      model gemini-1.5-pro
+      project_id jane-doe-mycompany-1
+      location us-central1
+      credentials {
+        ...
+        private_key "-----BEGIN PRIVATE KEY-----super-duper-secret-string\n-----END PRIVATE KEY-----\n"
+        client_email "jane_doe@mycompany.com"
+        ...
+      }
+    }
+  }
+  ```
+
+  This field cannot be used in the BAML Playground. For the playground, use **`credentials_content`** instead.
+
+
+  Overrides the contents of the Google Cloud Application Credentials. **Default: `env.GOOGLE_APPLICATION_CREDENTIALS_CONTENT`**
+
+```json Credentials
+  {
+    "type": "service_account",
+    "project_id": "my-project-id",
+    "private_key_id": "string",
+    "private_key": "-----BEGIN PRIVATE KEY-----string\n-----END PRIVATE KEY-----\n",
+    "client_email": "john_doe@gmail.com",
+    "client_id": "123456",
+    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
+    "token_uri": "https://oauth2.googleapis.com/token",
+    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
+    "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/...",
+    "universe_domain": "googleapis.com"
+  }
+```
+
+  Only use this in the BAML Playground. Use **`credentials`** in your runtime code.
+
+
+  Directly set a Google Cloud Authentication Token in lieu of token generation via the **`env.GOOGLE_APPLICATION_CREDENTIALS`** or **`env.GOOGLE_APPLICATION_CREDENTIALS_CONTENT`** fields.
+
+
+  The default role for any prompts that don't specify a role. **Default: `user`**
+
+
+  The Google model to use for the request.
+
+| Model | Input(s) | Optimized for |
+| --- | --- | --- |
+| `gemini-1.5-pro` | Audio, images, videos, and text | Complex reasoning tasks such as code and text generation, text editing, problem solving, data extraction and generation |
+| `gemini-1.5-flash` | Audio, images, videos, and text | Fast and versatile performance across a diverse variety of tasks |
+| `gemini-1.0-pro` | Text | Natural language tasks, multi-turn text and code chat, and code generation |
+
+See the [Google Model Docs](https://ai.google.dev/gemini-api/docs/models/gemini) for the latest models.
+
+
+  Additional headers to send with the request.
+
+Example:
+```baml BAML
+client MyClient {
+  provider vertex-ai
+  options {
+    model gemini-1.5-pro
+    project_id my-project-id
+    location us-central1
+    // Additional headers
+    headers {
+      "X-My-Header" "my-value"
+    }
+  }
+}
+```
+
+
+## Forwarded options
+  Safety settings to apply to the request.
+  You can stack different safety settings with a new `safetySettings` entry for each one. See the [Google Vertex API Request Docs](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference) for more information on what safety settings can be set.
+
+```baml BAML
+client MyClient {
+  provider vertex-ai
+  options {
+    model gemini-1.5-pro
+    project_id my-project-id
+    location us-central1
+
+    safetySettings {
+      category HARM_CATEGORY_HATE_SPEECH
+      threshold BLOCK_LOW_AND_ABOVE
+      method SEVERITY
+    }
+  }
+}
+```
+
+
+  Generation configurations to apply to the request. See the [Google Vertex API Request Docs](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/inference) for more information on what properties can be set.
+```baml BAML
+client MyClient {
+  provider vertex-ai
+  options {
+    model gemini-1.5-pro
+    project_id my-project-id
+    location us-central1
+
+    generationConfig {
+      maxOutputTokens 100
+      temperature 1
+    }
+  }
+}
```
+
+
+For all other options, see the [official Vertex AI documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/start/quickstarts/quickstart-multimodal).
+
diff --git a/fern/03-reference/baml/clients/providers/vllm.mdx b/fern/03-reference/baml/clients/providers/vllm.mdx
new file mode 100644
index 000000000..0f7f240f0
--- /dev/null
+++ b/fern/03-reference/baml/clients/providers/vllm.mdx
@@ -0,0 +1,22 @@
+---
+title: vLLM
+---
+
+[vLLM](https://docs.vllm.ai/) supports the OpenAI client, allowing you
+to use the [`openai-generic`](/docs/snippets/clients/providers/openai) provider
+with an overridden `base_url`.
+
+
+See https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html for more information.
+
+```baml BAML
+client MyClient {
+  provider "openai-generic"
+  options {
+    base_url "http://localhost:8000/v1"
+    api_key "token-abc123"
+    model "NousResearch/Meta-Llama-3-8B-Instruct"
+    default_role "user" // Required when using vLLM
+  }
+}
+```
\ No newline at end of file
diff --git a/fern/03-reference/baml/clients/strategy/fallback.mdx b/fern/03-reference/baml/clients/strategy/fallback.mdx
new file mode 100644
index 000000000..78021e00b
--- /dev/null
+++ b/fern/03-reference/baml/clients/strategy/fallback.mdx
@@ -0,0 +1,76 @@
+---
+title: fallback
+---
+
+
+You can use the `fallback` provider to add more resiliency to your application.
+
+A fallback will attempt to use the first client, and if it fails, it will try the second client, and so on.
+
+You can nest fallbacks inside of other fallbacks.
+
+```baml BAML
+client SuperDuperClient {
+  provider fallback
+  options {
+    strategy [
+      ClientA
+      ClientB
+      ClientC
+    ]
+  }
+}
+```
+
+## Options
+
+  The list of client names to try in order. Cannot be empty.
+
+
+## retry_policy
+
+Like any other client, you can specify a retry policy for the fallback client. See [retry_policy](retry-policy) for more information.
+
+The retry policy applies to the fallback itself: it kicks in only after the entire strategy has failed.
+
+```baml BAML
+client SuperDuperClient {
+  provider fallback
+  retry_policy MyRetryPolicy
+  options {
+    strategy [
+      ClientA
+      ClientB
+      ClientC
+    ]
+  }
+}
+```
+
+## Nesting multiple fallbacks
+
+You can nest multiple fallbacks inside of each other. The fallbacks will just chain as you would expect.
+
+```baml BAML
+client SuperDuperClient {
+  provider fallback
+  options {
+    strategy [
+      ClientA
+      ClientB
+      ClientC
+    ]
+  }
+}
+
+client MegaClient {
+  provider fallback
+  options {
+    strategy [
+      SuperDuperClient
+      ClientD
+    ]
+  }
+}
+```
\ No newline at end of file
diff --git a/fern/03-reference/baml/clients/strategy/retry.mdx b/fern/03-reference/baml/clients/strategy/retry.mdx
new file mode 100644
index 000000000..6b99a0abe
--- /dev/null
+++ b/fern/03-reference/baml/clients/strategy/retry.mdx
@@ -0,0 +1,86 @@
+---
+title: retry_policy
+---
+
+
+A retry policy can be attached to any `client` and will attempt to retry requests that fail due to a network error.
+
+```baml BAML
+retry_policy MyPolicyName {
+  max_retries 3
+}
+```
+
+Usage:
+```baml BAML
+client MyClient {
+  provider anthropic
+  retry_policy MyPolicyName
+  options {
+    model "claude-3-sonnet-20240229"
+    api_key env.ANTHROPIC_API_KEY
+  }
+}
+```
+
+## Fields
+  Number of **additional** retries to attempt after the initial request fails.
+
+
+  The strategy to use for retrying requests. Default is `constant_delay(delay_ms=200)`.
+
+| Strategy | Docs | Notes |
+| --- | --- | --- |
+| `constant_delay` | [Docs](#constant-delay) | |
+| `exponential_backoff` | [Docs](#exponential-backoff) | |
+
+Example:
+```baml BAML
+retry_policy MyPolicyName {
+  max_retries 3
+  strategy {
+    type constant_delay
+    delay_ms 200
+  }
+}
+```
+
+
+## Strategies
+
+### constant_delay
+  Configures the constant delay strategy.
+
+
+  The delay in milliseconds to wait between retries. **Default: 200**
+
+
+### exponential_backoff
+  Configures the exponential backoff strategy.
+
+
+  The initial delay in milliseconds to wait between retries. **Default: 200**
+
+
+  The multiplier to apply to the delay after each retry. **Default: 1.5**
+
+
+  The maximum delay in milliseconds to wait between retries. **Default: 10000**
\ No newline at end of file
diff --git a/fern/03-reference/baml/clients/strategy/round-robin.mdx b/fern/03-reference/baml/clients/strategy/round-robin.mdx
new file mode 100644
index 000000000..6cf55582c
--- /dev/null
+++ b/fern/03-reference/baml/clients/strategy/round-robin.mdx
@@ -0,0 +1,86 @@
+---
+title: round-robin
+---
+
+
+The `round-robin` provider allows you to distribute requests across multiple clients in a round-robin fashion. After each call, the next client in the list will be used.
+
+```baml BAML
+client MyClient {
+  provider round-robin
+  options {
+    strategy [
+      ClientA
+      ClientB
+      ClientC
+    ]
+  }
+}
+```
+
+## Options
+
+  The list of client names to try in order. Cannot be empty.
+
+
+  The index of the client to start with.
+
+  **Default is `random(0, len(strategy))`**
+
+  In the [BAML Playground](/docs/get-started/quickstart/editors-vscode), the default is `0`.
+
+
+## retry_policy
+
+When using a retry_policy with a round-robin client, it will rotate the strategy list after each retry.
+
+```baml BAML
+client MyClient {
+  provider round-robin
+  retry_policy MyRetryPolicy
+  options {
+    strategy [
+      ClientA
+      ClientB
+      ClientC
+    ]
+  }
+}
+```
+
+## Nesting multiple round-robin clients
+
+You can nest multiple round-robin clients inside of each other. The round-robins will just chain as you would expect.
+
+```baml BAML
+client MyClient {
+  provider round-robin
+  options {
+    strategy [
+      ClientA
+      ClientB
+      ClientC
+    ]
+  }
+}
+
+client MegaClient {
+  provider round-robin
+  options {
+    strategy [
+      MyClient
+      ClientD
+      ClientE
+    ]
+  }
+}
+
+// Calling MegaClient will call:
+// MyClient(ClientA)
+// ClientD
+// ClientE
+// MyClient(ClientB)
+// etc.
+```
diff --git a/fern/03-reference/baml/comments.mdx b/fern/03-reference/baml/comments.mdx
new file mode 100644
index 000000000..ecece577e
--- /dev/null
+++ b/fern/03-reference/baml/comments.mdx
@@ -0,0 +1,51 @@
+## Single line / trailing comments
+
+Denoted by `//`.
+
+```baml
+// hello there!
+foo // this is a trailing comment
+```
+
+## Docstrings
+
+We have no special syntax for docstrings. Instead, we use comments.
+Eventually, we'll support a `///` syntax for docstrings, which will
+also be used for generating documentation in `baml_client`.
+
+{/* ## Docstrings
+
+To add a docstring to any block, use `///`.
+
+```baml
+/// This is a docstring for a class
+class Foo {
+  /// This is a docstring for a property
+  property1 string
+}
+``` */}
+
+{/* ## Multiline comments
+
+Multiline comments are denoted via `{//` and `//}`.
+
+```baml
+{//
+  this is a multiline comment
+  foo
+  bar
+//}
+``` */}
+
+## Comments in block strings
+
+See [Block Strings](./values/string#block-strings) for more information.
+
+```jinja
+#"
+  My string. {#
+    This is a comment
+  #}
+  hi!
+"#
+```
diff --git a/fern/03-reference/baml/enum.mdx b/fern/03-reference/baml/enum.mdx
new file mode 100644
index 000000000..64fa381df
--- /dev/null
+++ b/fern/03-reference/baml/enum.mdx
@@ -0,0 +1,105 @@
+Enums are useful for classification tasks. BAML has helper functions that can help you serialize an enum into your prompt in a neatly formatted list (more on that later).
+
+To define your own custom enum in BAML:
+
+
+```baml BAML
+enum MyEnum {
+  Value1
+  Value2
+  Value3
+}
+```
+
+```python Python Equivalent
+from enum import StrEnum
+
+class MyEnum(StrEnum):
+    Value1 = "Value1"
+    Value2 = "Value2"
+    Value3 = "Value3"
+```
+
+```typescript Typescript Equivalent
+enum MyEnum {
+  Value1 = "Value1",
+  Value2 = "Value2",
+  Value3 = "Value3",
+}
+```
+
+
+- You may have as many values as you'd like.
+- Values may not be duplicated or empty.
+- Values may not contain spaces or special characters and must not start with a number.
+
+## Enum Attributes
+
+
+This is the name of the enum rendered in the prompt.
+
+
+
+If set, will allow you to add/remove/modify values of the enum dynamically at runtime (in your python/ts/etc code). See [dynamic enums](/guide/baml-advanced/dynamic-runtime-types) for more information.
+
+
+
+```baml BAML
+enum MyEnum {
+  Value1
+  Value2
+  Value3
+
+  @@alias("My Custom Enum")
+  @@dynamic // allows me to later skip Value2 at runtime
+}
+```
+
+## Value Attributes
+
+When prompt engineering, you can also alias values and add descriptions, or even skip them.
+
+
+Aliasing renames the values for the LLM to potentially "understand" your value better, while keeping the original name in your code, so you don't need to change your downstream code every time.
+
+This will also be used for parsing the output of the LLM back into the enum.
+
+
+
+This adds some additional context to the value in the prompt.
+
+
+
+Skip this value in the prompt and during parsing.
+ + + +```baml BAML +enum MyEnum { + Value1 @alias("complete_summary") @description("Answer in 2 sentences") + Value2 + Value3 @skip + Value4 @description(#" + This is a long description that spans multiple lines. + It can be useful for providing more context to the value. + "#) +} +``` + + +See more in [prompt syntax docs](/ref/prompt-syntax/what-is-jinja) diff --git a/fern/03-reference/baml/env-vars.mdx b/fern/03-reference/baml/env-vars.mdx new file mode 100644 index 000000000..c3b9a5ce0 --- /dev/null +++ b/fern/03-reference/baml/env-vars.mdx @@ -0,0 +1,28 @@ +To set a value to an environment variable, use the following syntax: + +```baml +env.YOUR_VARIABLE_NAME +``` + +Environment variables with spaces in their names are not supported. + +### Example + +Using an environment variable for API key: + +```baml +client MyCustomClient { + provider "openai" + options { + model "gpt-4o-mini" + // Set the API key using an environment variable + api_key env.MY_SUPER_SECRET_API_KEY + } +} +``` + +## Setting Environment Variables + + +## Error Handling +Errors for unset environment variables are only thrown when the variable is accessed. If your BAML project has 15 environment variables and 1 is used for the function you are calling, only that one environment variable will be checked for existence. diff --git a/fern/03-reference/baml/function.mdx b/fern/03-reference/baml/function.mdx new file mode 100644 index 000000000..ed92ddb02 --- /dev/null +++ b/fern/03-reference/baml/function.mdx @@ -0,0 +1,170 @@ +Functions in BAML define the contract between your application and AI models, providing type-safe interfaces for AI operations. + +## Overview + +A BAML function consists of: +- Input parameters with explicit types +- A return type specification +- An [LLM client](client-llm) +- A prompt (as a [block string](general-baml-syntax/string#block-strings)) + +```baml +function FunctionName(param: Type) -> ReturnType { + client ModelName + prompt #" + Template content + "# +} +``` + +## Function Declaration + +### Syntax + +```baml +function name(parameters) -> return_type { + client llm_specification + prompt block_string_specification +} +``` + +### Parameters + +- `name`: The function identifier (must start with a capital letter!) +- `parameters`: One or more typed parameters (e.g., `text: string`, `data: CustomType`) +- `return_type`: The type that the function guarantees to return (e.g., `string | MyType`) +- `llm_specification`: The LLM to use (e.g., `"openai/gpt-4o-mini"`, `GPT4Turbo`, `Claude2`) +- `block_string_specification`: The prompt template using Jinja syntax + +## Type System + +Functions leverage BAML's strong type system, supporting: + +### Built-in Types +- `string`: Text data +- `int`: Integer numbers +- `float`: Decimal numbers +- `bool`: True/false values +- `array`: Denoted with `[]` suffix (e.g., `string[]`) +- `map`: Key-value pairs (e.g., `map`) +- `literal`: Specific values (e.g., `"red" | "green" | "blue"`) +- [See all](types) + +### Custom Types + +Custom types can be defined using class declarations: + +```baml +class CustomType { + field1 string + field2 int + nested NestedType +} + +function ProcessCustomType(data: CustomType) -> ResultType { + // ... 
+}
+```
+
+## Prompt Templates
+
+### Jinja Syntax
+
+BAML uses Jinja templating for dynamic prompt generation:
+
+```baml
+prompt #"
+  Input data: {{ input_data }}
+
+  {% if condition %}
+  Conditional content
+  {% endif %}
+
+  {{ ctx.output_format }}
+"#
+```
+
+### Special Variables
+
+- `ctx.output_format`: Automatically generates format instructions based on the return type
+- `ctx.client`: Selected client and model name
+- `_.role`: Define the role of the message chunk
+
+## Error Handling
+
+Functions automatically handle common AI model errors and provide type validation:
+
+- JSON parsing errors are automatically corrected
+- Type mismatches are detected and reported
+- Network and rate limit errors are propagated to the caller
+
+## Usage Examples
+
+### Basic Function
+
+```baml
+function ExtractEmail(text: string) -> string {
+  client GPT4Turbo
+  prompt #"
+    Extract the email address from the following text:
+    {{ text }}
+
+    {{ ctx.output_format }}
+  "#
+}
+```
+
+### Complex Types
+
+```baml
+class Person {
+  name string
+  age int
+  contacts Contact[]
+}
+
+class Contact {
+  type "email" | "phone"
+  value string
+}
+
+function ParsePerson(data: string) -> Person {
+  client "openai/gpt-4o"
+  prompt #"
+    {{ ctx.output_format }}
+
+    {{ _.role('user') }}
+    {{ data }}
+  "#
+}
+```
+
+## `baml_client` Integration
+
+```python Python
+from baml_client import b
+from baml_client.types import Person
+
+async def process() -> Person:
+    result = await b.ParsePerson("John Doe, 30 years old...")
+    print(result.name)  # Type-safe access
+    return result
+```
+
+```typescript TypeScript
+import { b } from 'baml-client';
+import { Person } from 'baml-client/types';
+
+async function process(): Promise<Person> {
+  const result = await b.ParsePerson("John Doe, 30 years old...");
+  console.log(result.name); // Type-safe access
+  return result;
+}
+```
+
diff --git a/fern/03-reference/baml/int-float.mdx b/fern/03-reference/baml/int-float.mdx
new file mode 100644
index 000000000..d8f998c36
--- /dev/null
+++ b/fern/03-reference/baml/int-float.mdx
@@ -0,0 +1,41 @@
+
+Numerical values are denoted as follows in BAML:
+
+| Value | Description |
+| --- | --- |
+| `int` | Integer |
+| `float` | Floating point number |
+
+
+We support implicit casting of int -> float, but if you need something to explicitly be a float, use `0.0` instead of `0`.
+
+
+## Usage
+
+
+```baml
+function DescribeCircle(radius: int | float, pi: float?) -> string {
+  client "openai/gpt-4o-mini"
+  prompt #"
+    Describe a circle with a radius of {{ radius }} units.
+    Include the area of the circle using pi as {{ pi or 3.14159 }}.
+
+    What are some properties of the circle?
+  "#
+}
+
+test CircleDescription {
+  functions [DescribeCircle]
+  // will be cast to int
+  args { radius 5 }
+}
+
+test CircleDescription2 {
+  functions [DescribeCircle]
+  // will be cast to float
+  args {
+    radius 5.0
+    pi 3.14
+  }
+}
+```
diff --git a/fern/03-reference/baml/map.mdx b/fern/03-reference/baml/map.mdx
new file mode 100644
index 000000000..d0af50768
--- /dev/null
+++ b/fern/03-reference/baml/map.mdx
@@ -0,0 +1,115 @@
+Map values (AKA dictionaries) allow you to store key-value pairs.
+
+Most of BAML (clients, tests, classes, etc.) is represented as a map.
+
+## Syntax
+
+To declare a map in a BAML file, you can use the following syntax:
+
+```baml
+{
+  key1 value1,
+  key2 {
+    nestedKey1 nestedValue1,
+    nestedKey2 nestedValue2
+  }
+}
+```
+
+### Key Points:
+- **Colons**: Not used in BAML maps; keys and values are separated by spaces.
+- **Value Types**: Maps can contain unquoted or quoted strings, booleans, numbers, and nested maps as values.
+- **Classes**: Classes in BAML are represented as maps with keys and values.
+
+## Usage Examples
+
+### Example 1: Simple Map
+
+```baml
+
+class Person {
+  name string
+  age int
+  isEmployed bool
+}
+
+function DescribePerson(person: Person) -> string {
+  client "openai/gpt-4o-mini"
+  prompt #"
+    Describe the person with the following details: {{ person }}.
+  "#
+}
+
+test PersonDescription {
+  functions [DescribePerson]
+  args {
+    person {
+      name "John Doe",
+      age 30,
+      isEmployed true
+    }
+  }
+}
+```
+
+### Example 2: Nested Map
+
+```baml
+
+class Company {
+  name string
+  location map<string, string>
+  employeeCount int
+}
+
+function DescribeCompany(company: Company) -> string {
+  client "openai/gpt-4o-mini"
+  prompt #"
+    Describe the company with the following details: {{ company }}.
+  "#
+}
+
+test CompanyDescription {
+  functions [DescribeCompany]
+  args {
+    company {
+      name "TechCorp",
+      location {
+        city "San Francisco",
+        state "California"
+      },
+      employeeCount 500
+    }
+  }
+}
+```
+
+### Example 3: Map with Multiline String
+
+```baml
+class Project {
+  title string
+  description string
+}
+
+function DescribeProject(project: Project) -> string {
+  client "openai/gpt-4o-mini"
+  prompt #"
+    Describe the project with the following details: {{ project }}.
+  "#
+}
+
+test ProjectDescription {
+  functions [DescribeProject]
+  args {
+    project {
+      title "AI Research",
+      description #"
+        This project focuses on developing
+        advanced AI algorithms to improve
+        machine learning capabilities.
+      "#
+    }
+  }
+}
+```
diff --git a/fern/03-reference/baml/prompt-syntax/comments.mdx b/fern/03-reference/baml/prompt-syntax/comments.mdx
new file mode 100644
index 000000000..02acc6a44
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/comments.mdx
@@ -0,0 +1,4 @@
+---
+---
+
+Use `{# ... #}` inside the `prompt` to add comments.
diff --git a/fern/03-reference/baml/prompt-syntax/conditionals.mdx b/fern/03-reference/baml/prompt-syntax/conditionals.mdx
new file mode 100644
index 000000000..1c4ca20b3
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/conditionals.mdx
@@ -0,0 +1,17 @@
+---
+title: Conditionals
+---
+
+Use conditional statements to control the flow and output of your templates based on conditions:
+
+```jinja
+function MyFunc(user: User) -> string {
+  prompt #"
+    {% if user.is_active %}
+    Welcome back, {{ user.name }}!
+    {% else %}
+    Please activate your account.
+    {% endif %}
+  "#
+}
+```
diff --git a/fern/03-reference/baml/prompt-syntax/ctx.mdx b/fern/03-reference/baml/prompt-syntax/ctx.mdx
new file mode 100644
index 000000000..46f62d96b
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/ctx.mdx
@@ -0,0 +1,39 @@
+---
+title: ctx (accessing metadata)
+---
+
+
+If you try rendering `{{ ctx }}` into the prompt (literally just write that out!), you'll see all the metadata we inject to run this prompt within the playground preview.
+
+In the earlier tutorial we mentioned `ctx.output_format`, which contains the schema, but you can also access client information:
+
+
+## Usecase: Conditionally render based on client provider
+
+In this example, we render the list of messages in XML tags if the provider is Anthropic (as they recommend using them as delimiters). See also [template_string](/ref/baml/template-string) as it's used here.
+
+```baml
+template_string RenderConditionally(messages: Message[]) #"
+  {% for message in messages %}
+    {% if ctx.client.provider == "anthropic" %}
+      <message>{{ message.user_name }}: {{ message.content }}</message>
+    {% else %}
+      {{ message.user_name }}: {{ message.content }}
+    {% endif %}
+  {% endfor %}
+"#
+
+function MyFuncWithGPT4(messages: Message[]) -> string {
+  client GPT4o
+  prompt #"
+    {{ RenderConditionally(messages) }}
+  "#
+}
+
+function MyFuncWithAnthropic(messages: Message[]) -> string {
+  client Claude35
+  prompt #"
+    {{ RenderConditionally(messages) }}
+  "#
+}
+```
\ No newline at end of file
diff --git a/fern/03-reference/baml/prompt-syntax/loops.mdx b/fern/03-reference/baml/prompt-syntax/loops.mdx
new file mode 100644
index 000000000..23e758370
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/loops.mdx
@@ -0,0 +1,44 @@
+---
+title: Loops
+---
+
+Here's how you can iterate over a list of items, accessing each item's attributes:
+
+```jinja
+function MyFunc(messages: Message[]) -> string {
+  prompt #"
+    {% for message in messages %}
+    {{ message.user_name }}: {{ message.content }}
+    {% endfor %}
+  "#
+}
+```
+
+## loop
+
+Jinja provides a `loop` object that can be used to access information about the loop. Here are some of the attributes of the `loop` object:
+
+
+| Variable | Description |
+|------------------|-----------------------------------------------------------------------------|
+| loop.index | The current iteration of the loop. (1 indexed) |
+| loop.index0 | The current iteration of the loop. (0 indexed) |
+| loop.revindex | The number of iterations from the end of the loop (1 indexed) |
+| loop.revindex0 | The number of iterations from the end of the loop (0 indexed) |
+| loop.first | True if first iteration. |
+| loop.last | True if last iteration. |
+| loop.length | The number of items in the sequence. |
+| loop.cycle | A helper function to cycle between a list of sequences. See the explanation below. |
+| loop.depth | Indicates how deep in a recursive loop the rendering currently is. Starts at level 1 |
+| loop.depth0 | Indicates how deep in a recursive loop the rendering currently is. Starts at level 0 |
+| loop.previtem | The item from the previous iteration of the loop. Undefined during the first iteration. |
+| loop.nextitem | The item from the following iteration of the loop. Undefined during the last iteration. |
+| loop.changed(*val) | True if previously called with a different value (or not called at all). |
+
+```jinja2
+prompt #"
+  {% for item in items %}
+  {{ loop.index }}: {{ item }}
+  {% endfor %}
+"#
+```
\ No newline at end of file
diff --git a/fern/03-reference/baml/prompt-syntax/output-format.mdx b/fern/03-reference/baml/prompt-syntax/output-format.mdx
new file mode 100644
index 000000000..9fdea0936
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/output-format.mdx
@@ -0,0 +1,133 @@
+---
+title: ctx.output_format
+---
+
+
+`{{ ctx.output_format }}` is used within a prompt template (or in any template_string) to print out the function's output schema into the prompt. It describes to the LLM how to generate a structure BAML can parse (usually JSON).
+
+Here's an example of a function with `{{ ctx.output_format }}`, and how it gets rendered by BAML before sending it to the LLM.
+
+**BAML Prompt**
+
+```baml
+class Resume {
+  name string
+  education Education[]
+}
+function ExtractResume(resume_text: string) -> Resume {
+  prompt #"
+    Extract this resume:
+    ---
+    {{ resume_text }}
+    ---
+
+    {{ ctx.output_format }}
+  "#
+}
+```
+
+**Rendered prompt**
+
+```text
+Extract this resume:
+---
+Aaron V.
+Bachelors CS, 2015
+UT Austin
+---
+
+Answer in JSON using this schema:
+{
+  name: string
+  education: [
+    {
+      school: string
+      graduation_year: string
+    }
+  ]
+}
+```
+
+## Controlling the output_format
+
+`ctx.output_format` can also be called as a function with parameters to customize how the schema is printed, like this:
+```text
+
+{{ ctx.output_format(prefix="If you use this schema correctly, I'll tip $400:\n", always_hoist_enums=true) }}
+```
+
+Here are the parameters:
+
+The prefix instruction to use before printing out the schema.
+
+```text
+If you use this schema correctly, I'll tip $400:
+{
+  ...
+}
+```
+BAML's default prefix varies based on the function's return type.
+
+| Function return type | Default Prefix |
+| --- | --- |
+| Primitive (String) | |
+| Primitive (Other) | `Answer as a: ` |
+| Enum | `Answer with any of the categories:\n` |
+| Class | `Answer in JSON using this schema:\n` |
+| List | `Answer with a JSON Array using this schema:\n` |
+| Union | `Answer in JSON using any of these schemas:\n` |
+| Optional | `Answer in JSON using this schema:\n` |
+
+
+
+Whether to inline the enum definitions in the schema, or print them above. **Default: false**
+
+
+**Inlined**
+```
+
+Answer in this json schema:
+{
+  categories: "ONE" | "TWO" | "THREE"
+}
+```
+
+**Hoisted**
+```
+MyCategory
+---
+ONE
+TWO
+THREE
+
+Answer in this json schema:
+{
+  categories: MyCategory
+}
+```
+
+BAML will always hoist if you add a [description](/docs/snippets/enum#aliases-descriptions) to any of the enum values.
+
+
+
+**Default: ` or `**
+
+If a type is a union like `string | int` or an optional like `string?`, this indicates how it's rendered.
+
+
+BAML renders it as `property: string or null` as we have observed some LLMs have trouble identifying what `property: string | null` means (and are better with plain English).
+
+You can always set it to ` | ` or something else for a specific model you use.
+
+
+## Why BAML doesn't use JSON schema format in prompts
+BAML uses "type definitions" or "jsonish" format instead of the long-winded json-schema format.
+The tl;dr is that JSON schemas are
+1. 4x more inefficient than "type definitions".
+2. very unreadable by humans (and hence models)
+3. perform worse than type definitions (especially on deeper nested objects or smaller models)
+
+Read our [full article on json schema vs type definitions](https://www.boundaryml.com/blog/type-definition-prompting-baml)
diff --git a/fern/03-reference/baml/prompt-syntax/role.mdx b/fern/03-reference/baml/prompt-syntax/role.mdx
new file mode 100644
index 000000000..1221a87b3
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/role.mdx
@@ -0,0 +1,84 @@
+---
+title: _.role
+---
+
+
+BAML prompts are compiled into a `messages` array (or equivalent) that most LLM providers use:
+
+BAML Prompt -> `[{ role: "user", content: "hi there" }, { role: "assistant", ... }]`
+
+By default, BAML puts everything into a single message with the `system` role if available (or whichever one is best for the provider you have selected).
+When in doubt, the playground always shows you the current role for each message.
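+
+For example, a prompt with no role markers compiles to a single message. As an illustrative sketch (the exact default role depends on the provider):
+
+```json
+[
+  { "role": "system", "content": "Extract the resume: ..." }
+]
+```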
+
+To specify a role explicitly, add the `{{ _.role("user") }}` syntax to the prompt:
+```rust
+prompt #"
+  {{ _.role("system") }} Everything after
+  this element will be a system prompt!
+
+  {{ _.role("user") }}
+  And everything after this
+  will be a user role
+"#
+```
+Try it out in [PromptFiddle](https://www.promptfiddle.com)
+
+
+  BAML may change the default role to `user` if using specific APIs that only support user prompts, like when using prompts with images.
+
+
+We use `_` as the prefix of `_.role()` since we plan on adding more helpers here in the future.
+
+## Example -- Using `_.role()` in for-loops
+
+Here's how you can inject a list of user/assistant messages and mark each as a user or assistant role:
+
+```rust BAML
+class Message {
+  role string
+  message string
+}
+
+function ChatWithAgent(messages: Message[]) -> string {
+  client GPT4o
+  prompt #"
+    {% for m in messages %}
+    {{ _.role(m.role) }}
+    {{ m.message }}
+    {% endfor %}
+  "#
+}
+```
+
+```rust BAML
+function ChatMessages(messages: string[]) -> string {
+  client GPT4o
+  prompt #"
+    {% for m in messages %}
+    {{ _.role("user" if loop.index % 2 == 1 else "assistant") }}
+    {{ m }}
+    {% endfor %}
+  "#
+}
+```
+
+## Example -- Using `_.role()` in a template string
+
+```baml BAML
+template_string YouAreA(name: string, job: string) #"
+  {{ _.role("system") }}
+  You are an expert {{ name }}. {{ job }}
+
+  {{ ctx.output_format }}
+  {{ _.role("user") }}
+"#
+
+function CheckJobPosting(post: string) -> bool {
+  client GPT4o
+  prompt #"
+    {{ YouAreA("hr admin", "Your role is to ensure every job posting is bias-free.") }}
+
+    {{ post }}
+  "#
+}
+```
diff --git a/fern/03-reference/baml/prompt-syntax/variables.mdx b/fern/03-reference/baml/prompt-syntax/variables.mdx
new file mode 100644
index 000000000..2e30c406f
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/variables.mdx
@@ -0,0 +1,6 @@
+---
+title: Variables
+---
+
+
+See [template_string](/ref/baml/template-string) to learn how to add variables in .baml files
\ No newline at end of file
diff --git a/fern/03-reference/baml/prompt-syntax/what-is-jinja.mdx b/fern/03-reference/baml/prompt-syntax/what-is-jinja.mdx
new file mode 100644
index 000000000..97c95ab03
--- /dev/null
+++ b/fern/03-reference/baml/prompt-syntax/what-is-jinja.mdx
@@ -0,0 +1,83 @@
+---
+title: What is Jinja / Cookbook
+---
+
+
+BAML prompt strings are essentially [Minijinja](https://docs.rs/minijinja/latest/minijinja/filters/index.html#functions) templates, which offer the ability to express logic and data manipulation within strings. Jinja is a very popular and mature templating language amongst Python developers, so GitHub Copilot or another LLM can already help you write most of the logic you want.
+
+## Jinja Cookbook
+
+When in doubt -- use the BAML VSCode Playground preview. It will show you the fully rendered prompt, even when it has complex logic.
+
+### Basic Syntax
+
+- `{% ... %}`: Use for executing statements such as for-loops or conditionals.
+- `{{ ... }}`: Use for outputting expressions or variables.
+- `{# ... #}`: Use for comments within the template, which will not be rendered.
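+
+For instance, here's a minimal template combining all three delimiters (an illustrative sketch; `user` stands in for one of your function's inputs):
+
+```jinja
+{# This comment is stripped from the rendered prompt. #}
+{% if user.is_active %}
+Welcome back, {{ user.name }}!
+{% endif %}
+```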
+
+### Loops / Iterating Over Lists
+
+Here's how you can iterate over a list of items, accessing each item's attributes:
+
+```jinja Jinja
+function MyFunc(messages: Message[]) -> string {
+  prompt #"
+    {% for message in messages %}
+    {{ message.user_name }}: {{ message.content }}
+    {% endfor %}
+  "#
+}
+```
+
+### Conditional Statements
+
+Use conditional statements to control the flow and output of your templates based on conditions:
+
+```jinja Jinja
+function MyFunc(user: User) -> string {
+  prompt #"
+    {% if user.is_active %}
+    Welcome back, {{ user.name }}!
+    {% else %}
+    Please activate your account.
+    {% endif %}
+  "#
+}
+```
+
+### Setting Variables
+
+You can define and use variables within your templates to simplify expressions or manage data:
+
+```jinja
+function MyFunc(items: Item[]) -> string {
+  prompt #"
+    {% set total_price = 0 %}
+    {% for item in items %}
+      {% set total_price = total_price + item.price %}
+    {% endfor %}
+    Total price: {{ total_price }}
+  "#
+}
+```
+
+### Including other Templates
+
+To promote reusability, you can include other templates within a template. See [template strings](/ref/baml/template_string):
+
+```baml
+template_string PrintUserInfo(arg1: string, arg2: User) #"
+  {{ arg1 }}
+  The user's name is: {{ arg2.name }}
+"#
+
+function MyFunc(arg1: string, user: User) -> string {
+  prompt #"
+    Here is the user info:
+    {{ PrintUserInfo(arg1, user) }}
+  "#
+}
+```
+
+### Built-in filters
+See [jinja docs](https://jinja.palletsprojects.com/en/3.1.x/templates/#list-of-builtin-filters)
\ No newline at end of file
diff --git a/fern/03-reference/baml/string.mdx b/fern/03-reference/baml/string.mdx
new file mode 100644
index 000000000..eb89c4ef1
--- /dev/null
+++ b/fern/03-reference/baml/string.mdx
@@ -0,0 +1,85 @@
+BAML treats templatized strings as first-class citizens.
+
+## Quoted Strings
+
+These are valid **inline strings**, which are surrounded by double quotes. They behave like regular strings in most programming languages, and can be escaped with a backslash.
+
+These cannot have template variables or expressions inside them. Use a block string for that.
+
+
+```rust
+"Hello World"
+
+"\n"
+```
+
+## Unquoted Strings
+
+BAML also supports simple **unquoted in-line** strings. The string below is valid! These are useful for simple strings such as configuration options.
+
+```rust
+Hello World
+```
+
+Unquoted strings **may not** have any of the following since they are reserved characters (note this may change in the future):
+
+- Quotes "double" or 'single'
+- At-signs @
+- Curlies {}
+- Hashtags #
+- Parentheses ()
+- Brackets []
+- Commas ,
+- Newlines
+
+When in doubt, use a quoted string or a block string, but the VSCode extension will warn you if there is a parsing issue.
+
+## Block Strings
+
+If a string is on multiple lines, it must be surrounded by #" and "#. This is called a **block string**.
+
+```rust
+#"
+Hello
+World
+"#
+```
+
+Block strings are automatically dedented and stripped of the first and last newline. This means that the following will render the same thing as above:
+
+```rust
+#"
+  Hello
+  World
+"#
+```
+
+When used for templating, block strings can contain expressions and variables using [Jinja](https://jinja.palletsprojects.com/en/3.0.x/templates/) syntax.
+
+```rust
+template_string Greeting(name: string) #"
+  Hello {{ name }}!
+"#
+```
+
+### Escape Characters
+
+Escaped characters are injected as-is into the string.
+
+```rust
+#"\n"#
+```
+
+This will render as `\n` in the output.
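+
+As a small sketch of that difference (assuming the escape-handling rules described above): the same `\n` sequence behaves differently in a quoted string, where it's processed as an escape, and in a block string, where it's passed through as-is:
+
+```baml
+// Quoted string: \n is an escape sequence (a real newline).
+"line one\nline two"
+
+// Block string: \n is injected as a literal backslash and n.
+#"line one\nline two"#
+```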
+ +### Adding a `"#` +To include a `"#` in a block string, you can prefix it with a different count of `#`. + + +```baml +###" + #"Hello"# +"### +``` + +This will render as `#"Hello"#`. diff --git a/fern/03-reference/baml/template_string.mdx b/fern/03-reference/baml/template_string.mdx new file mode 100644 index 000000000..e84dbeb67 --- /dev/null +++ b/fern/03-reference/baml/template_string.mdx @@ -0,0 +1,35 @@ + +Writing prompts requires a lot of string manipulation. BAML has a `template_string` to let you combine different string templates together. Under-the-hood they use [jinja](/ref/prompt-syntax/what-is-jinja) to evaluate the string and its inputs. + +Think of template strings as functions that have variables, and return a string. They can be used to define reusable parts of a prompt, or to make the prompt more readable by breaking it into smaller parts. + +Example +```baml BAML +// Inject a list of "system" or "user" messages into the prompt. +template_string PrintMessages(messages: Message[]) #" + {% for m in messages %} + {{ _.role(m.role) }} + {{ m.message }} + {% endfor %} +"# + +function ClassifyConversation(messages: Message[]) -> Category[] { + client GPT4Turbo + prompt #" + Classify this conversation: + {{ PrintMessages(messages) }} + + Use the following categories: + {{ ctx.output_format}} + "# +} +``` + +In this example we can call the template_string `PrintMessages` to subdivide the prompt into "user" or "system" messages using `_.role()` (see [message roles](/ref/prompt-syntax/role)). This allows us to reuse the logic for printing messages in multiple prompts. + +You can nest as many template strings inside each other and call them however many times you want. + + + The BAML linter may give you a warning when you use template strings due to a static analysis limitation. You can ignore this warning. If it renders in the playground, you're good! + +Use the playground preview to ensure your template string is being evaluated correctly! diff --git a/fern/03-reference/baml/test.mdx b/fern/03-reference/baml/test.mdx new file mode 100644 index 000000000..7956bb3b8 --- /dev/null +++ b/fern/03-reference/baml/test.mdx @@ -0,0 +1,221 @@ +Tests are first-class citizens in BAML, designed to make testing AI functions straightforward and robust. BAML tests can be written anywhere in your codebase and run with minimal setup. 
+
+## Overview
+
+A BAML test consists of:
+- Test name and metadata
+- Functions under test
+- Input arguments
+- Optional testing configuration
+- Optional assertions
+
+```baml
+test TestName {
+  functions [FunctionName]
+  args {
+    paramName "value"
+  }
+}
+```
+
+## Test Declaration
+
+### Syntax
+
+```baml
+test name {
+  functions [function_list]
+  args {
+    parameter_assignments
+  }
+}
+```
+
+### Components
+
+- `name`: Test identifier (unique per function)
+- `functions`: List of functions to test
+- `args`: Input parameters for the test case
+
+## Input Types
+
+### Basic Types
+
+Simple values are provided directly:
+
+```baml
+test SimpleTest {
+  functions [ClassifyMessage]
+  args {
+    input "Can't access my account"
+  }
+}
+```
+
+### Complex Objects
+
+Objects are specified using nested structures:
+
+```baml
+test ComplexTest {
+  functions [ProcessMessage]
+  args {
+    message {
+      user "john_doe"
+      content "Hello world"
+      metadata {
+        timestamp 1234567890
+        priority "high"
+      }
+    }
+  }
+}
+```
+
+### Arrays
+
+Arrays use bracket notation:
+
+```baml
+test ArrayTest {
+  functions [BatchProcess]
+  args {
+    messages [
+      {
+        user "user1"
+        content "Message 1"
+      }
+      {
+        user "user2"
+        content "Message 2"
+      }
+    ]
+  }
+}
+```
+
+## Media Inputs
+
+### Images
+
+Images can be specified using three methods:
+
+1. **File Reference**
+```baml {4-6}
+test ImageFileTest {
+  functions [AnalyzeImage]
+  args {
+    param {
+      file "../images/test.png"
+    }
+  }
+}
+```
+
+2. **URL Reference**
+```baml {4-6}
+test ImageUrlTest {
+  functions [AnalyzeImage]
+  args {
+    param {
+      url "https://example.com/image.jpg"
+    }
+  }
+}
+```
+
+3. **Base64 Data**
+```baml {4-7}
+test ImageBase64Test {
+  functions [AnalyzeImage]
+  args {
+    param {
+      base64 "a41f..."
+      media_type "image/png"
+    }
+  }
+}
+```
+
+### Audio
+
+Similar to images, audio can be specified in three ways:
+
+1. **File Reference**
+```baml
+test AudioFileTest {
+  functions [TranscribeAudio]
+  args {
+    audio {
+      file "../audio/sample.mp3"
+    }
+  }
+}
+```
+
+2. **URL Reference**
+```baml
+test AudioUrlTest {
+  functions [TranscribeAudio]
+  args {
+    audio {
+      url "https://example.com/audio.mp3"
+    }
+  }
+}
+```
+
+3. **Base64 Data**
+```baml
+test AudioBase64Test {
+  functions [TranscribeAudio]
+  args {
+    audio {
+      base64 "..."
+      media_type "audio/mp3"
+    }
+  }
+}
+```
+
+## Multi-line Strings
+
+For long text inputs, use the block string syntax:
+
+```baml
+test LongTextTest {
+  functions [AnalyzeText]
+  args {
+    content #"
+      This is a multi-line
+      text input that preserves
+      formatting and whitespace
+    "#
+  }
+}
+```
+
+## Testing Multiple Functions
+
+This requires each function to have the exact same parameters:
+
+```baml
+test EndToEndFlow {
+  functions [
+    ExtractInfo
+    ProcessInfo
+    ValidateResult
+  ]
+  args {
+    input "test data"
+  }
+}
+```
+
+## Integration with Development Tools
+
+### VSCode Integration
+
+- Tests can be run directly from the BAML playground
+- Real-time syntax validation
+- Test result visualization
diff --git a/fern/03-reference/baml/types.mdx b/fern/03-reference/baml/types.mdx
new file mode 100644
index 000000000..c94edc7dd
--- /dev/null
+++ b/fern/03-reference/baml/types.mdx
@@ -0,0 +1,435 @@
+
+
+
+Here's a list of all the types that can be represented in BAML:
+
+## Primitive Types
+* `bool`
+* `int`
+* `float`
+* `string`
+* `null`
+
+## Literal Types
+  This feature was added in: v0.61.0.
+
+
+The primitive types `string`, `int` and `bool` can be constrained to a specific value.
+For example, you can use literal values as return types: + +```rust +function ClassifyIssue(issue_description: string) -> "bug" | "enhancement" { + client GPT4Turbo + prompt #" + Classify the issue based on the following description: + {{ ctx.output_format }} + + {{ _.role("user")}} + {{ issue_description }} + "# +} +``` + +See [Union(|)](#union-) for more details. + + +## Multimodal Types +See [calling a function with multimodal types](/docs/snippets/calling-baml/multi-modal) +and [testing image inputs](/docs/snippets/test-cases#images) + + + BAML's multimodal types are designed for ease of use: we have deliberately made it + easy for you to construct a `image` or `audio` instance from a URL. Under the + hood, depending on the model you're using, BAML may need to download the image + and transcode it (usually as base64) for the model to consume. + + This ease-of-use does come with some tradeoffs; namely, if you construct + an `image` or `audio` instance using untrusted user input, you may be exposing + yourself to [server-side request forgery (SSRF) attacks][ssrf]. Attackers may be + able to fetch files on your internal network, on external networks using your + application's identity, or simply excessively drive up your cloud network + bandwidth bill. + + To prevent this, we recommend only using URLs from trusted sources/users or + validating them using allowlists or denylists. + +[ssrf]: https://portswigger.net/web-security/ssrf + + +### `image` + +You can use an image like this for models that support them: + +```rust +function DescribeImage(myImg: image) -> string { + client GPT4Turbo + prompt #" + {{ _.role("user")}} + Describe the image in four words: + {{ myImg }} + "# +} +``` + +You cannot name a variable `image` at the moment as it is a reserved keyword. + +Calling a function with an image type: + + +```python Python +from baml_py import Image +from baml_client import b + +async def test_image_input(): + # from URL + res = await b.TestImageInput( + img=Image.from_url("https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png") + ) + + # Base64 image + image_b64 = "iVBORw0K...." + res = await b.TestImageInput( + img=Image.from_base64("image/png", image_b64) + ) +``` + +```typescript TypeScript +import { b } from '../baml_client' +import { Image } from "@boundaryml/baml" +... + + // URL + let res = await b.TestImageInput( + Image.fromUrl('https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png'), + ) + + // Base64 + let res = await b.TestImageInput( + Image.fromBase64('image/png', image_b64), + ) +``` + +```ruby Ruby +require_relative "baml_client/client" + +b = Baml.Client +Image = Baml::Image + +def test_image_input + # from URL + res = b.TestImageInput( + img: Image.from_url("https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png") + ) + + # Base64 image + image_b64 = "iVBORw0K...." + res = b.TestImageInput( + img: Image.from_base64("image/png", image_b64) + ) +end +``` + + + +If using Pydantic, the following are valid ways to construct the `Image` type. 
+ +```json +{ + "url": "https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png" +} +``` + +```json +{ + "url": "https://upload.wikimedia.org/wikipedia/en/4/4d/Shrek_%28character%29.png", + "media_type": "image/png" +} +``` + +```json +{ + "base64": "iVBORw0K....", +} +``` + +```json +{ + "base64": "iVBORw0K....", + "media_type": "image/png" +} +``` + + +### `audio` + +Example +```rust +function DescribeSound(myAudio: audio) -> string { + client GPT4Turbo + prompt #" + {{ _.role("user")}} + Describe the audio in four words: + {{ myAudio }} + "# +} +``` +Calling functions that have `audio` types. + + +```python Python +from baml_py import Audio +from baml_client import b + +async def run(): + # from URL + res = await b.TestAudioInput( + audio=Audio.from_url( + "https://actions.google.com/sounds/v1/emergency/beeper_emergency_call.ogg" + ) + ) + + # Base64 + b64 = "iVBORw0K...." + res = await b.TestAudioInput( + audio=Audio.from_base64("audio/ogg", b64) + ) +``` + +```typescript TypeScript +import { b } from '../baml_client' +import { Audio } from "@boundaryml/baml" +... + + // URL + let res = await b.TestAudioInput( + Audio.fromUrl('https://actions.google.com/sounds/v1/emergency/beeper_emergency_call.ogg'), + ) + + // Base64 + const audio_base64 = ".." + let res = await b.TestAudioInput( + Audio.fromBase64('audio/ogg', audio_base64), + ) + +``` + +```ruby Ruby +require_relative "baml_client/client" + +b = Baml.Client +Audio = Baml::Audio + +def test_audio_input + # from URL + res = b.TestAudioInput( + audio: Audio.from_url( + "https://actions.google.com/sounds/v1/emergency/beeper_emergency_call.ogg" + ) + ) + + # Base64 image + audio_b64 = "iVBORw0K...." + res = b.TestAudioInput( + audio: Audio.from_base64("audio/mp3", audio_b64) + ) +end +``` + + +## Composite/Structured Types + +### enum + +**See also:** [Enum](/docs/snippets/enum) + +A user-defined type consisting of a set of named constants. +Use it when you need a model to choose from a known set of values, like in classification problems + +```baml +enum Name { + Value1 + Value2 @description("My optional description annotation") +} +``` + +If you need to add new variants, because they need to be loaded from a file or fetched dynamically +from a database, you can do this with [Dynamic Types](/guide/baml-advanced/dynamic-runtime-types). + +### class + +**See also:** [Class](/docs/snippets/class) + +Classes are for user-defined complex data structures. + +Use when you need an LLM to call another function (e.g. OpenAI's function calling), you can model the function's parameters as a class. You can also get models to return complex structured data by using a class. + +**Example:** + +Note that properties have no `:` +```baml +class Car { + model string + year int @description("Year of manufacture") +} +``` + +If you need to add fields to a class because some properties of your class are only +known at runtime, you can do this with [Dynamic Types](/docs/calling-baml/dynamic-types). + +### Optional (?) + +A type that represents a value that might or might not be present. + +Useful when a variable might not have a value and you want to explicitly handle its absence. + +**Syntax:** `Type?` + +**Example:** `int?` or `(MyClass | int)?` + +### Union (|) + +A type that can hold one of several specified types. + +This can be helpful with **function calling**, where you want to return different types of data depending on which function should be called. 
+
+**Syntax:** `Type1 | Type2`
+
+**Example:** `int | string` or `(int | string) | MyClass` or `string | MyClass | int[]`
+
+
+  Order is important. `int | string` is not the same as `string | int`.
+
+  For example, if you have a `"1"` string, it will be parsed as an `int` if
+  you use `int | string`, but as a `string` if you use `string | int`.
+
+
+### List/Array ([])
+
+A collection of elements of the same type.
+
+**Syntax:** `Type[]`
+
+**Example:** `string[]` or `(int | string)[]` or `int[][]`
+
+
+  * Array types can be nested to create multi-dimensional arrays
+  * An array type cannot be optional
+
+
+### Map
+
+A mapping of strings to elements of another type.
+
+**Syntax**: `map<string, Type>`
+
+**Example**: `map<string, string>`
+
+{/*
+  For TS users: `map<string, string>` will generate a
+  `Record<string, string>` type annotation, but using any other type for the
+  key will generate a `Map`, e.g. `map<int, string>` in BAML will generate a
+  `Map<number, string>` type annotation in TypeScript.
+  */}
+
+### ❌ Set
+
+- Not yet supported. Use a `List` instead.
+
+### ❌ Tuple
+
+- Not yet supported. Use a `class` instead.
+
+## Examples and Equivalents
+
+Here are some examples and what their equivalents are in different languages.
+
+### Example 1
+
+
+```baml BAML
+int? | string[] | MyClass
+```
+
+```python Python Equivalent
+Union[Optional[int], List[str], MyClass]
+```
+
+```typescript TypeScript Equivalent
+(number | null) | string[] | MyClass
+```
+
+
+
+### Example 2
+
+
+```baml BAML
+string[]
+```
+
+```python Python Equivalent
+List[str]
+```
+
+```typescript TypeScript Equivalent
+string[]
+```
+
+
+
+### Example 3
+
+
+```baml BAML
+(int | float)[]
+```
+```python Python Equivalent
+List[Union[int, float]]
+```
+
+```typescript TypeScript Equivalent
+number[]
+```
+
+
+
+### Example 4
+
+
+```baml BAML
+(int? | string[] | MyClass)[]
+```
+
+```python Python Equivalent
+List[Union[Optional[int], List[str], MyClass]]
+```
+
+```typescript TypeScript Equivalent
+((number | null) | string[] | MyClass)[]
+```
+
+
+
+### Example 5
+
+
+```baml BAML
+"str" | 1 | false
+```
+
+```python Python Equivalent
+Union[Literal["str"], Literal[1], Literal[False]]
+```
+
+```typescript TypeScript Equivalent
+"str" | 1 | false
+```
+
+
+
+## ⚠️ Unsupported
+- `any/json` - Not supported. We don't want to encourage its use, as it defeats the purpose of having a type system. If you really need it, use `string` for now and call `json.parse` yourself, or use [dynamic types](/guide/baml-advanced/dynamic-runtime-types).
+- `datetime` - Not yet supported. Use a `string` instead.
+- `duration` - Not yet supported. We recommend using `string` and specifying that it must be an "ISO8601 duration" in the description, which you can parse yourself into a duration.
+- `units (currency, temperature)` - Not yet supported. Use a number (`int` or `float`) and make the unit part of the variable name, e.g. `temperature_fahrenheit` and `cost_usd` (see [@alias](/ref/baml/class)).
diff --git a/fern/03-reference/baml_client/typebuilder.mdx b/fern/03-reference/baml_client/typebuilder.mdx
new file mode 100644
index 000000000..460fe19eb
--- /dev/null
+++ b/fern/03-reference/baml_client/typebuilder.mdx
@@ -0,0 +1,338 @@
+---
+title: TypeBuilder
+---
+
+
+`TypeBuilder` is used to create or modify output schemas at runtime. It's particularly useful when you have dynamic output structures that can't be determined at compile time, like categories from a database or user-provided schemas.
+
+Here's a simple example of using TypeBuilder to add new enum values before calling a BAML function:
+
+**BAML Code**
+```baml {4}
+enum Category {
+  RED
+  BLUE
+  @@dynamic // Makes this enum modifiable at runtime
+}
+
+function Categorize(text: string) -> Category {
+  prompt #"
+    Categorize this text:
+    {{ text }}
+
+    {{ ctx.output_format }}
+  "#
+}
+```
+
+**Runtime Usage**
+
+```python Python
+from baml_client.type_builder import TypeBuilder
+from baml_client import b
+
+# Create a TypeBuilder instance
+tb = TypeBuilder()
+
+# Add new values to the Category enum
+tb.Category.add_value('GREEN')
+tb.Category.add_value('YELLOW')
+
+# Pass the typebuilder when calling the function
+result = await b.Categorize("The sun is bright", {"tb": tb})
+# result can now be RED, BLUE, GREEN, or YELLOW
+```
+```typescript TypeScript
+import { TypeBuilder } from '../baml_client/type_builder'
+import { b } from '../baml_client'
+
+// Create a TypeBuilder instance
+const tb = new TypeBuilder()
+
+// Add new values to the Category enum
+tb.Category.addValue('GREEN')
+tb.Category.addValue('YELLOW')
+
+// Pass the typebuilder when calling the function
+const result = await b.Categorize("The sun is bright", { tb })
+// result can now be RED, BLUE, GREEN, or YELLOW
+```
+```ruby Ruby
+require_relative 'baml_client/client'
+
+b = Baml.Client
+
+# Create a TypeBuilder instance
+tb = Baml::TypeBuilder.new
+
+# Add new values to the Category enum
+tb.Category.add_value('GREEN')
+tb.Category.add_value('YELLOW')
+
+# Pass the typebuilder when calling the function
+result = b.Categorize(text: "The sun is bright", baml_options: { tb: tb })
+# result can now be RED, BLUE, GREEN, or YELLOW
+```
+
+
+## Dynamic Types
+
+There are two ways to use TypeBuilder:
+1. Modifying existing BAML types marked with `@@dynamic`
+2.
Creating entirely new types at runtime + +### Modifying Existing Types + +To modify an existing BAML type, mark it with `@@dynamic`: + + +```baml +class User { + name string + age int + @@dynamic // Allow adding more properties +} +``` + +**Runtime Usage** + +```python Python +tb = TypeBuilder() +tb.User.add_property('email', tb.string()) +tb.User.add_property('address', tb.string()) +``` +```typescript TypeScript +const tb = new TypeBuilder() +tb.User.addProperty('email', tb.string()) +tb.User.addProperty('address', tb.string()) +``` +```ruby Ruby +tb = Baml::TypeBuilder.new +tb.User.add_property('email', tb.string) +tb.User.add_property('address', tb.string) +``` + + + + +```baml +enum Category { + VALUE1 + VALUE2 + @@dynamic // Allow adding more values +} +``` + +**Runtime Usage** + +```python Python +tb = TypeBuilder() +tb.Category.add_value('VALUE3') +tb.Category.add_value('VALUE4') +``` +```typescript TypeScript +const tb = new TypeBuilder() +tb.Category.addValue('VALUE3') +tb.Category.addValue('VALUE4') +``` +```ruby Ruby +tb = Baml::TypeBuilder.new +tb.Category.add_value('VALUE3') +tb.Category.add_value('VALUE4') +``` + + + +### Creating New Types + +You can also create entirely new types at runtime: + + +```python Python +tb = TypeBuilder() + +# Create a new enum +hobbies = tb.add_enum("Hobbies") +hobbies.add_value("Soccer") +hobbies.add_value("Reading") + +# Create a new class +address = tb.add_class("Address") +address.add_property("street", tb.string()) +address.add_property("city", tb.string()) + +# Attach new types to existing BAML type +tb.User.add_property("hobbies", hobbies.type().list()) +tb.User.add_property("address", address.type()) +``` +```typescript TypeScript +const tb = new TypeBuilder() + +// Create a new enum +const hobbies = tb.addEnum("Hobbies") +hobbies.addValue("Soccer") +hobbies.addValue("Reading") + +// Create a new class +const address = tb.addClass("Address") +address.addProperty("street", tb.string()) +address.addProperty("city", tb.string()) + +// Attach new types to existing BAML type +tb.User.addProperty("hobbies", hobbies.type().list()) +tb.User.addProperty("address", address.type()) +``` +```ruby Ruby +tb = Baml::TypeBuilder.new + +# Create a new enum +hobbies = tb.add_enum("Hobbies") +hobbies.add_value("Soccer") +hobbies.add_value("Reading") + +# Create a new class +address = tb.add_class("Address") +address.add_property("street", tb.string) +address.add_property("city", tb.string) + +# Attach new types to existing BAML type +tb.User.add_property("hobbies", hobbies.type.list) +tb.User.add_property("address", address.type) +``` + + +## Type Builders + +TypeBuilder provides methods for building different kinds of types: + +| Method | Description | Example | +|--------|-------------|---------| +| `string()` | Creates a string type | `tb.string()` | +| `int()` | Creates an integer type | `tb.int()` | +| `float()` | Creates a float type | `tb.float()` | +| `bool()` | Creates a boolean type | `tb.bool()` | +| `list()` | Makes a type into a list | `tb.string().list()` | +| `optional()` | Makes a type optional | `tb.string().optional()` | + +## Adding Descriptions + +You can add descriptions to properties and enum values to help guide the LLM: + + +```python Python +tb = TypeBuilder() + +# Add description to a property +tb.User.add_property("email", tb.string()) \ + .description("User's primary email address") + +# Add description to an enum value +tb.Category.add_value("URGENT") \ + .description("Needs immediate attention") +``` +```typescript TypeScript 
+const tb = new TypeBuilder() + +// Add description to a property +tb.User.addProperty("email", tb.string()) + .description("User's primary email address") + +// Add description to an enum value +tb.Category.addValue("URGENT") + .description("Needs immediate attention") +``` +```ruby Ruby +tb = Baml::TypeBuilder.new + +# Add description to a property +tb.User.add_property("email", tb.string) + .description("User's primary email address") + +# Add description to an enum value +tb.Category.add_value("URGENT") + .description("Needs immediate attention") +``` + + +## Common Patterns + +Here are some common patterns when using TypeBuilder: + +1. **Dynamic Categories**: When categories come from a database or external source + +```python Python +categories = fetch_categories_from_db() +tb = TypeBuilder() +for category in categories: + tb.Category.add_value(category) +``` +```typescript TypeScript +const categories = await fetchCategoriesFromDb() +const tb = new TypeBuilder() +categories.forEach(category => { + tb.Category.addValue(category) +}) +``` +```ruby Ruby +categories = fetch_categories_from_db +tb = Baml::TypeBuilder.new +categories.each do |category| + tb.Category.add_value(category) +end +``` + + +2. **Form Fields**: When extracting dynamic form fields + +```python Python +fields = get_form_fields() +tb = TypeBuilder() +form = tb.add_class("Form") +for field in fields: + form.add_property(field.name, tb.string()) +``` +```typescript TypeScript +const fields = getFormFields() +const tb = new TypeBuilder() +const form = tb.addClass("Form") +fields.forEach(field => { + form.addProperty(field.name, tb.string()) +}) +``` +```ruby Ruby +fields = get_form_fields +tb = Baml::TypeBuilder.new +form = tb.add_class("Form") +fields.each do |field| + form.add_property(field.name, tb.string) +end +``` + + +3. **Optional Properties**: When some fields might not be present + +```python Python +tb = TypeBuilder() +tb.User.add_property("middle_name", tb.string().optional()) +``` +```typescript TypeScript +const tb = new TypeBuilder() +tb.User.addProperty("middle_name", tb.string().optional()) +``` +```ruby Ruby +tb = Baml::TypeBuilder.new +tb.User.add_property("middle_name", tb.string.optional) +``` + + + +All types added through TypeBuilder must be connected to the return type of your BAML function. Standalone types that aren't referenced won't affect the output schema. + + +## Future Features + +We're working on additional features for TypeBuilder: + +- JSON Schema support (awaiting use cases) +- OpenAPI schema integration +- Pydantic model support + +If you're interested in these features, please join the discussion in our GitHub issues. 
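+
+Finally, to make the earlier note about connecting types concrete, here's a
+minimal sketch. It assumes a BAML function `ExtractUser(text: string) -> User`
+where `User` is marked `@@dynamic`; the new `Address` class only shows up in
+the output schema because it is attached to `User`, the function's return type:
+
+```python
+from baml_client.type_builder import TypeBuilder
+from baml_client import b
+
+async def extract_with_address(text: str):
+    tb = TypeBuilder()
+
+    # A standalone class: on its own, this would NOT affect the schema
+    address = tb.add_class("Address")
+    address.add_property("street", tb.string())
+    address.add_property("city", tb.string())
+
+    # Attaching it to the return type's class connects it to the schema
+    tb.User.add_property("address", address.type().optional())
+
+    return await b.ExtractUser(text, {"tb": tb})
+```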
\ No newline at end of file
diff --git a/fern/03-reference/extract/examples.mdx b/fern/03-reference/extract/examples.mdx
new file mode 100644
index 000000000..ffee44c52
--- /dev/null
+++ b/fern/03-reference/extract/examples.mdx
@@ -0,0 +1,240 @@
+---
+title: Examples
+---
+
+### Upload a File (PDF, images)
+
+
+
+```python title="Python"
+import requests
+from typing import List, Dict, Any
+
+def extract_data(api_key: str, file_paths: List[str], prompt: str) -> Dict[str, Any]:
+    url = "https://api2.boundaryml.com/v3/extract"
+    headers = {
+        "Authorization": f"Bearer {api_key}"
+    }
+    # Open the files for upload, and make sure they get closed afterwards
+    files = [('files', open(file_path, 'rb')) for file_path in file_paths]
+    try:
+        data = {
+            'prompt': prompt
+        }
+        response = requests.post(url, headers=headers, files=files, data=data)
+        response.raise_for_status()
+        return response.json()
+    finally:
+        for _, f in files:
+            f.close()
+
+# Usage example
+api_key = 'your_api_key_here'
+file_paths = ['path/to/file1.pdf', 'path/to/file2.png']
+prompt = 'Please extract the text content.'
+
+result = extract_data(api_key, file_paths, prompt)
+print(result)
+```
+
+```typescript title="TypeScript"
+import axios, { AxiosResponse } from 'axios';
+import FormData from 'form-data';
+import * as fs from 'fs';
+
+interface ExtractResponse {
+  extractions: Extraction[];
+  usage: Usage;
+  request_id: string;
+}
+
+interface Extraction {
+  source: Source;
+  output: any;
+}
+
+interface Source {
+  type: string;
+  name?: string;
+  page?: number;
+}
+
+interface Usage {
+  consumed_chars: number;
+  produced_chars: number;
+  consumed_megapixels: number;
+}
+
+async function extractData(apiKey: string, filePaths: string[], prompt: string): Promise<ExtractResponse> {
+  const url = 'https://api2.boundaryml.com/v3/extract';
+  const formData = new FormData();
+
+  filePaths.forEach(filePath => {
+    formData.append('files', fs.createReadStream(filePath));
+  });
+  formData.append('prompt', prompt);
+
+  const headers = {
+    ...formData.getHeaders(),
+    'Authorization': `Bearer ${apiKey}`,
+  };
+
+  const response: AxiosResponse<ExtractResponse> = await axios.post(url, formData, { headers });
+  return response.data;
+}
+
+// Usage example
+const apiKey = 'your_api_key_here';
+const filePaths = ['path/to/file1.pdf', 'path/to/file2.png'];
+const prompt = 'Please extract the text content.';
+
+extractData(apiKey, filePaths, prompt)
+  .then(result => console.log(result))
+  .catch(error => console.error(error));
+```
+
+```ruby title="Ruby"
+require 'net/http'
+require 'uri'
+require 'json'
+
+def extract_data(api_key, file_paths, prompt)
+  uri = URI.parse('https://api2.boundaryml.com/v3/extract')
+  request = Net::HTTP::Post.new(uri)
+  request['Authorization'] = "Bearer #{api_key}"
+
+  form_data = [['prompt', prompt]]
+  file_paths.each do |file_path|
+    form_data << ['files', File.open(file_path)]
+  end
+
+  request.set_form(form_data, 'multipart/form-data')
+
+  req_options = {
+    use_ssl: uri.scheme == 'https',
+  }
+
+  response = Net::HTTP.start(uri.hostname, uri.port, req_options) do |http|
+    http.request(request)
+  end
+
+  if response.is_a?(Net::HTTPSuccess)
+    JSON.parse(response.body)
+  else
+    raise "Request failed: #{response.code} #{response.message}"
+  end
+end
+
+# Usage example
+api_key = 'your_api_key_here'
+file_paths = ['path/to/file1.pdf', 'path/to/file2.png']
+prompt = 'Please extract the text content.'
+
+result = extract_data(api_key, file_paths, prompt)
+puts result
+```
+
+```go title="Go"
+package main
+
+import (
+    "bytes"
+    "encoding/json"
+    "fmt"
+    "io"
+    "mime/multipart"
+    "net/http"
+    "os"
+)
+
+type ExtractResponse struct {
+    Extractions []Extraction `json:"extractions"`
+    Usage       Usage        `json:"usage"`
+    RequestID   string       `json:"request_id"`
+}
+
+type Extraction struct {
+    Source Source      `json:"source"`
+    Output interface{} `json:"output"`
+}
+
+type Source struct {
+    Type string `json:"type"`
+    Name string `json:"name,omitempty"`
+    Page int    `json:"page,omitempty"`
+}
+
+type Usage struct {
+    ConsumedChars      int     `json:"consumed_chars"`
+    ProducedChars      int     `json:"produced_chars"`
+    ConsumedMegapixels float64 `json:"consumed_megapixels"`
+}
+
+func extractData(apiKey string, filePaths []string, prompt string) (ExtractResponse, error) {
+    url := "https://api2.boundaryml.com/v3/extract"
+    body := &bytes.Buffer{}
+    writer := multipart.NewWriter(body)
+
+    for _, filePath := range filePaths {
+        file, err := os.Open(filePath)
+        if err != nil {
+            return ExtractResponse{}, err
+        }
+        // Deferred closes run when extractData returns
+        defer file.Close()
+
+        part, err := writer.CreateFormFile("files", file.Name())
+        if err != nil {
+            return ExtractResponse{}, err
+        }
+        _, err = io.Copy(part, file)
+        if err != nil {
+            return ExtractResponse{}, err
+        }
+    }
+
+    _ = writer.WriteField("prompt", prompt)
+    err := writer.Close()
+    if err != nil {
+        return ExtractResponse{}, err
+    }
+
+    req, err := http.NewRequest("POST", url, body)
+    if err != nil {
+        return ExtractResponse{}, err
+    }
+    req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", apiKey))
+    req.Header.Set("Content-Type", writer.FormDataContentType())
+
+    client := &http.Client{}
+    resp, err := client.Do(req)
+    if err != nil {
+        return ExtractResponse{}, err
+    }
+    defer resp.Body.Close()
+
+    if resp.StatusCode != http.StatusOK {
+        return ExtractResponse{}, fmt.Errorf("request failed with status %s", resp.Status)
+    }
+
+    var extractResponse ExtractResponse
+    err = json.NewDecoder(resp.Body).Decode(&extractResponse)
+    if err != nil {
+        return ExtractResponse{}, err
+    }
+
+    return extractResponse, nil
+}
+
+func main() {
+    apiKey := "your_api_key_here"
+    filePaths := []string{"path/to/file1.pdf", "path/to/file2.png"}
+    prompt := "Please extract the text content."
+
+    result, err := extractData(apiKey, filePaths, prompt)
+    if err != nil {
+        fmt.Println("Error:", err)
+        return
+    }
+
+    fmt.Printf("Result: %+v\n", result)
+}
+
+```
+
+
\ No newline at end of file
diff --git a/fern/03-reference/extract/summary.mdx b/fern/03-reference/extract/summary.mdx
new file mode 100644
index 000000000..1425be661
--- /dev/null
+++ b/fern/03-reference/extract/summary.mdx
@@ -0,0 +1,11 @@
+---
+title: Overview
+---
+
+We leveraged our expertise in structured data extraction to create a general-purpose extraction API that is independent of BAML.
+
+If you are interested in converting PDF documents, invoices, or images into structured data, this API is for you.
+
+
+To try it out, visit our [Dashboard v2](https://dashboard.boundaryml.com). Note that this is a different website from the current tracing/observability dashboard (app.boundaryml.com). We are working on unifying the two.
+
diff --git a/fern/03-reference/generator.mdx b/fern/03-reference/generator.mdx
new file mode 100644
index 000000000..0fc65957c
--- /dev/null
+++ b/fern/03-reference/generator.mdx
@@ -0,0 +1,79 @@
+
+
+Each `generator` that you define in your BAML project will tell `baml-cli
+generate` to generate code for a specific target language. You can define
+multiple `generator` clauses in your BAML project, and `baml-cli generate` will
+generate code for each of them.
+
+If you created your project using `baml-cli init`, then one has already been generated for you!
+
+
+
+
+```baml Python
+generator target {
+  // Valid values: "python/pydantic", "typescript", "ruby/sorbet"
+  output_type "python/pydantic"
+
+  // Where the generated code will be saved (relative to baml_src/)
+  output_dir "../"
+
+  // What interface you prefer to use for the generated code (sync/async)
+  // Both are generated regardless of the choice, just modifies what is exported
+  // at the top level
+  default_client_mode "sync"
+
+  // Version of runtime to generate code for (should match installed baml-py version)
+  version "0.63.0"
+}
+```
+
+```baml TypeScript
+generator target {
+  // Valid values: "python/pydantic", "typescript", "ruby/sorbet"
+  output_type "typescript"
+
+  // Where the generated code will be saved (relative to baml_src/)
+  output_dir "../"
+
+  // What interface you prefer to use for the generated code (sync/async)
+  // Both are generated regardless of the choice, just modifies what is exported
+  // at the top level
+  default_client_mode "async"
+
+  // Version of runtime to generate code for (should match the package @boundaryml/baml version)
+  version "0.63.0"
+}
+```
+
+```baml Ruby (beta)
+generator target {
+  // Valid values: "python/pydantic", "typescript", "ruby/sorbet"
+  output_type "ruby/sorbet"
+
+  // Where the generated code will be saved (relative to baml_src/)
+  output_dir "../"
+
+  // Version of runtime to generate code for (should match installed `baml` package version)
+  version "0.63.0"
+}
+```
+
+```baml OpenAPI
+generator target {
+  // Valid values: "python/pydantic", "typescript", "ruby/sorbet", "rest/openapi"
+  output_type "rest/openapi"
+
+  // Where the generated code will be saved (relative to baml_src/)
+  output_dir "../"
+
+  // Version of runtime to generate code for (should match installed `baml` package version)
+  version "0.54.0"
+
+  // 'baml-cli generate' will run this after generating openapi.yaml, to generate your OpenAPI client
+  // This command will be run from within $output_dir
+  on_generate "npx @openapitools/openapi-generator-cli generate -i openapi.yaml -g OPENAPI_CLIENT_TYPE -o ."
+}
+```
+
+
\ No newline at end of file
diff --git a/fern/03-reference/overview.mdx b/fern/03-reference/overview.mdx
new file mode 100644
index 000000000..9de0a85c6
--- /dev/null
+++ b/fern/03-reference/overview.mdx
@@ -0,0 +1,36 @@
+---
+title: BAML Reference
+---
+
+Welcome to the BAML reference guide!
+
+Here you can learn about every BAML keyword, feature, and setting.
+
+For more in-depth explanations, we recommend reading the [Guides](/guide) first.
+
+This reference covers:
+
+- Learn everything about BAML's language features.
+- Learn about BAML's Jinja prompt syntax.
+- BAML CLI commands and flags.
+- VSCode BAML Extension settings.
+- LLM clients and how to configure them.
+- API Reference for the `baml_client` object.
diff --git a/fern/03-reference/vscode-ext/clipath.mdx b/fern/03-reference/vscode-ext/clipath.mdx
new file mode 100644
index 000000000..37a98361b
--- /dev/null
+++ b/fern/03-reference/vscode-ext/clipath.mdx
@@ -0,0 +1,21 @@
+| Type | Value |
+| --- | --- |
+| `string \| null` | null |
+
+
+
+If set, the extension will use this `baml-cli` binary for code generation instead of the generator packaged with the extension.
+
+
+We recommend this setting!
This prevents mismatches between the VSCode Extension and the installed BAML package.
+
+
+## Usage
+
+On a Unix-like system, you can run `which baml-cli` in your project to find the path to the CLI.
+
+```json settings.json
+{
+  "baml.cliPath": "/path/to/baml-cli"
+}
+```
diff --git a/fern/03-reference/vscode-ext/generateCodeOnSave.mdx b/fern/03-reference/vscode-ext/generateCodeOnSave.mdx
new file mode 100644
index 000000000..444a72e8d
--- /dev/null
+++ b/fern/03-reference/vscode-ext/generateCodeOnSave.mdx
@@ -0,0 +1,21 @@
+| Type | Default Value |
+| --- | --- |
+| `"always" \| "never"` | "always" |
+
+
+- `always`: Generate code for `baml_client` on every save
+- `never`: Do not generate `baml_client` on any save
+
+If you have a generator of type `rest/*`, `"always"` will not do any code generation. You will have to manually run:
+
+```
+path/to/baml-cli generate
+```
+
+## Usage
+
+```json settings.json
+{
+  "baml.generateCodeOnSave": "never"
+}
+```
diff --git a/fern/03-reference/vscode-ext/restartTSServerOnSave.mdx b/fern/03-reference/vscode-ext/restartTSServerOnSave.mdx
new file mode 100644
index 000000000..6aa0c98ab
--- /dev/null
+++ b/fern/03-reference/vscode-ext/restartTSServerOnSave.mdx
@@ -0,0 +1,14 @@
+| Type | Default Value |
+| --- | --- |
+| `boolean` | `true` |
+
+- `true`: Automatically restarts the TypeScript Language Server in VSCode when the BAML extension generates the TypeScript `baml_client` files. This is a workaround for VSCode's issues with recognizing newly added directories and files in the TypeScript Language Server. No-op if not generating TypeScript files.
+- `false`: Does not automatically restart the TypeScript Language Server. You may need to manually reload the TS server to ensure it recognizes the new types.
+
+## Usage
+
+```json
+{
+  "baml.restartTSServerOnSave": true
+}
+```
\ No newline at end of file
diff --git a/fern/README.md b/fern/README.md
new file mode 100644
index 000000000..2dd842a53
--- /dev/null
+++ b/fern/README.md
@@ -0,0 +1,22 @@
+# Fern Configuration
+
+View the documentation [here](https://boundary.docs.buildwithfern.com).
+
+## Updating your Docs
+
+### Local Development server
+
+To run a local development server with hot-reloading, you can run the following command:
+
+```sh
+fern docs dev
+```
+
+### Hosted URL
+
+Documentation is automatically updated when you push to main via the `fern generate` command.
+ +```sh +npm install -g fern-api # only required once +fern generate --docs +``` diff --git a/fern/assets/comparisons/prompt_view.gif b/fern/assets/comparisons/prompt_view.gif new file mode 100644 index 000000000..637fe2b81 Binary files /dev/null and b/fern/assets/comparisons/prompt_view.gif differ diff --git a/fern/assets/comparisons/resume_change1.png b/fern/assets/comparisons/resume_change1.png new file mode 100644 index 000000000..d6ea4e8e5 Binary files /dev/null and b/fern/assets/comparisons/resume_change1.png differ diff --git a/fern/assets/comparisons/resume_error.png b/fern/assets/comparisons/resume_error.png new file mode 100644 index 000000000..a978af804 Binary files /dev/null and b/fern/assets/comparisons/resume_error.png differ diff --git a/fern/assets/comparisons/resume_playground1.png b/fern/assets/comparisons/resume_playground1.png new file mode 100644 index 000000000..4efbfa68a Binary files /dev/null and b/fern/assets/comparisons/resume_playground1.png differ diff --git a/fern/assets/favicon.ico b/fern/assets/favicon.ico new file mode 100644 index 000000000..26297a778 Binary files /dev/null and b/fern/assets/favicon.ico differ diff --git a/fern/assets/languages/baml-to-py.png b/fern/assets/languages/baml-to-py.png new file mode 100644 index 000000000..f5b757560 Binary files /dev/null and b/fern/assets/languages/baml-to-py.png differ diff --git a/fern/assets/languages/baml-to-rb.png b/fern/assets/languages/baml-to-rb.png new file mode 100644 index 000000000..4fd57969d Binary files /dev/null and b/fern/assets/languages/baml-to-rb.png differ diff --git a/fern/assets/languages/baml-to-rest.png b/fern/assets/languages/baml-to-rest.png new file mode 100644 index 000000000..e72b64e3d Binary files /dev/null and b/fern/assets/languages/baml-to-rest.png differ diff --git a/fern/assets/languages/baml-to-ts.png b/fern/assets/languages/baml-to-ts.png new file mode 100644 index 000000000..2469248c9 Binary files /dev/null and b/fern/assets/languages/baml-to-ts.png differ diff --git a/fern/assets/open-sans-v17-all-charsets-700.woff2 b/fern/assets/open-sans-v17-all-charsets-700.woff2 new file mode 100644 index 000000000..421a1ab25 Binary files /dev/null and b/fern/assets/open-sans-v17-all-charsets-700.woff2 differ diff --git a/fern/assets/open-sans-v17-all-charsets-italic.woff2 b/fern/assets/open-sans-v17-all-charsets-italic.woff2 new file mode 100644 index 000000000..398b68a08 Binary files /dev/null and b/fern/assets/open-sans-v17-all-charsets-italic.woff2 differ diff --git a/fern/assets/open-sans-v17-all-charsets-regular.woff2 b/fern/assets/open-sans-v17-all-charsets-regular.woff2 new file mode 100644 index 000000000..8383e94c6 Binary files /dev/null and b/fern/assets/open-sans-v17-all-charsets-regular.woff2 differ diff --git a/fern/assets/studio/dashboard-test-pic.png b/fern/assets/studio/dashboard-test-pic.png new file mode 100644 index 000000000..d2d33dabb Binary files /dev/null and b/fern/assets/studio/dashboard-test-pic.png differ diff --git a/fern/assets/styles.css b/fern/assets/styles.css new file mode 100644 index 000000000..5df4e2e03 --- /dev/null +++ b/fern/assets/styles.css @@ -0,0 +1,96 @@ +.fern-sidebar-link-content { + padding-top: 0.4rem; + padding-bottom: 0.4rem; +} + +.fern-sidebar-link-container { + min-height: 26px; +} + +.fern-prose code { + background-color: #f8f8f8; + /* font-weight: 600; */ +} + +.fern-sidebar-link-container[data-state="active"] .fern-sidebar-link { + font-weight: 600; +} +.fern-header { + border: 0px; +} +.fern-header-tabs-list { + height: 28px; + 
font-size: 12px; +} +.fern-header-tabs { + height: 32px; + max-height: 36px; +} + +.fern-header-tab-button > * { + font-size: 12px; + /* color */ + color: #333333; +} + +/* .fern-sidebar-container { + color: #000000; +} */ + +body .p { + font-size: 1rem !important; +} + +p { + color: #333333; +} + +fern-docs { + color: #333333; +} + +ul { + color: #333333; +} + +fern-button-text { + color: #ffffff; +} + +h1, +h2, +h3, +h4, +h5, +h6 { + color: #333333; +} + +.fern-sidebar-link-container:not([data-state="active"]) .fern-sidebar-link { + color: #333333; +} + +span:not(a span) { + color: #333333; +} + +:root { + --accent-aaa: 139, 92, 246; +} + +.fern-mdx-link { + text-decoration-color: rgba(167, 139, 250, 1); +} + +/* .fern-sidebar-group li { + margin-top: 0px !important; +} */ + +/* .group\/sidebar { + z-index: 1000; + padding-top: 20px; +} */ + +/* .fern-search-bar { + margin-bottom: 10px; +} */ diff --git a/fern/assets/terminal-logs/log_message.png b/fern/assets/terminal-logs/log_message.png new file mode 100644 index 000000000..186faa382 Binary files /dev/null and b/fern/assets/terminal-logs/log_message.png differ diff --git a/fern/assets/vscode/baml-client.png b/fern/assets/vscode/baml-client.png new file mode 100644 index 000000000..82a549e27 Binary files /dev/null and b/fern/assets/vscode/baml-client.png differ diff --git a/fern/assets/vscode/bedrock-playground.png b/fern/assets/vscode/bedrock-playground.png new file mode 100644 index 000000000..876e5af67 Binary files /dev/null and b/fern/assets/vscode/bedrock-playground.png differ diff --git a/fern/assets/vscode/code-lens.png b/fern/assets/vscode/code-lens.png new file mode 100644 index 000000000..c397169f9 Binary files /dev/null and b/fern/assets/vscode/code-lens.png differ diff --git a/fern/assets/vscode/curl-preview.png b/fern/assets/vscode/curl-preview.png new file mode 100644 index 000000000..8d10335ad Binary files /dev/null and b/fern/assets/vscode/curl-preview.png differ diff --git a/fern/assets/vscode/dev-tools.png b/fern/assets/vscode/dev-tools.png new file mode 100644 index 000000000..9fbab7c0e Binary files /dev/null and b/fern/assets/vscode/dev-tools.png differ diff --git a/fern/assets/vscode/extension-status.png b/fern/assets/vscode/extension-status.png new file mode 100644 index 000000000..05041a2a7 Binary files /dev/null and b/fern/assets/vscode/extension-status.png differ diff --git a/fern/assets/vscode/extract-resume-prompt-preview.png b/fern/assets/vscode/extract-resume-prompt-preview.png new file mode 100644 index 000000000..2afcf6346 Binary files /dev/null and b/fern/assets/vscode/extract-resume-prompt-preview.png differ diff --git a/fern/assets/vscode/open-playground.png b/fern/assets/vscode/open-playground.png new file mode 100644 index 000000000..5645ca1d9 Binary files /dev/null and b/fern/assets/vscode/open-playground.png differ diff --git a/fern/assets/vscode/playground-preview.png b/fern/assets/vscode/playground-preview.png new file mode 100644 index 000000000..31a4c9f39 Binary files /dev/null and b/fern/assets/vscode/playground-preview.png differ diff --git a/fern/assets/vscode/test-case-buttons.png b/fern/assets/vscode/test-case-buttons.png new file mode 100644 index 000000000..91ffc7877 Binary files /dev/null and b/fern/assets/vscode/test-case-buttons.png differ diff --git a/fern/assets/vscode/test-cases.png b/fern/assets/vscode/test-cases.png new file mode 100644 index 000000000..20660f8d4 Binary files /dev/null and b/fern/assets/vscode/test-cases.png differ diff --git 
a/fern/assets/vscode/vscode-settings.png b/fern/assets/vscode/vscode-settings.png new file mode 100644 index 000000000..97c5c7490 Binary files /dev/null and b/fern/assets/vscode/vscode-settings.png differ diff --git a/fern/docs.yml b/fern/docs.yml new file mode 100644 index 000000000..00284ce2a --- /dev/null +++ b/fern/docs.yml @@ -0,0 +1,691 @@ +instances: + - url: https://boundary.docs.buildwithfern.com + # vv example custom domain configuration vv + custom-domain: docs.boundaryml.com + edit-this-page: + github: + owner: BoundaryML + repo: baml + branch: canary + +title: Boundary Documentation + +tabs: + home: + display-name: Home + icon: fa-solid fa-house + slug: home + guides: + display-name: Guide + icon: fa-solid fa-book + slug: guide + examples: + display-name: Examples + icon: fa-solid fa-grid-2 + reference: + display-name: BAML Reference + icon: fa-solid fa-code + slug: ref + # paid-tooling: + # display-name: Paid Offerings + # icon: fa-solid fa-dollar-sign + # slug: paid-tooling + playground: + display-name: Playground + icon: fa-solid fa-play + href: https://promptfiddle.com + changelog: + display-name: Changelog + icon: fa-regular fa-history + slug: changelog +navigation: + - tab: home + layout: + - page: Welcome + path: pages/welcome.mdx + - tab: guides + layout: + - section: Introduction + contents: + - page: What is BAML? + icon: fa-regular fa-question-circle + path: 01-guide/what-are-function-definitions.mdx + - page: What's the baml_src folder + icon: fa-regular fa-folder + path: 01-guide/what-is-baml_src.mdx + slug: baml_src + - page: What's baml_client + icon: fa-regular fa-folder-gear + path: 01-guide/what-is-baml_client.mdx + slug: baml_client + - section: "Installation: Editors" + contents: + - page: VSCode Extension + icon: fa-brands fa-microsoft + path: 01-guide/01-editors/vscode.mdx + - page: Cursor Extension + icon: fa-brands fa-microsoft + path: 01-guide/01-editors/cursor.mdx + - page: Others + icon: fa-brands fa-microsoft + path: 01-guide/01-editors/others.mdx + - section: "Installation: Language" + contents: + - page: Python + icon: fa-brands fa-python + path: 01-guide/02-languages/python.mdx + - page: Typescript + icon: fa-brands fa-js + path: 01-guide/02-languages/typescript.mdx + - page: Ruby + icon: fa-regular fa-gem + path: 01-guide/02-languages/ruby.mdx + - page: REST API (other languages) + icon: fa-regular fa-network-wired + path: 01-guide/02-languages/rest.mdx + + - page: NextJS + icon: fa-brands fa-react + path: 01-guide/08-integrations/nextjs.mdx + - section: Development + contents: + - page: Environment Variables + icon: fa-regular fa-cogs + path: 01-guide/03-development/environment-variables.mdx + - page: Terminal Logs + icon: fa-regular fa-file-lines + path: 01-guide/03-development/terminal-logs.mdx + - page: Upgrade BAML versions + icon: fa-regular fa-circle-arrow-up + path: 01-guide/03-development/upgrade-baml-versions.mdx + - section: Deploying + icon: fa-regular fa-rocket + contents: + - page: AWS + icon: fa-brands fa-aws + path: 01-guide/03-development/deploying/aws.mdx + - page: NextJs + icon: fa-brands fa-react + path: 01-guide/03-development/deploying/nextjs.mdx + - page: Docker + icon: fa-brands fa-docker + path: 01-guide/03-development/deploying/docker.mdx + - page: Docker (REST API) + icon: fa-brands fa-docker + path: 01-guide/03-development/deploying/openapi.mdx + - section: BAML Basics + contents: + - page: Prompting with BAML + icon: fa-solid fa-terminal + path: 01-guide/04-baml-basics/my-first-function.mdx + - page: Switching LLMs + 
icon: fa-regular fa-random
+            path: 01-guide/04-baml-basics/switching-llms.mdx
+          - page: Testing functions
+            icon: fa-regular fa-vial
+            path: 01-guide/04-baml-basics/testing-functions.mdx
+          - page: Streaming
+            icon: fa-regular fa-faucet
+            path: 01-guide/04-baml-basics/streaming.mdx
+          - page: Multi-Modal (Images / Audio)
+            icon: fa-regular fa-image
+            path: 01-guide/04-baml-basics/multi-modal.mdx
+          - page: Error Handling
+            icon: fa-regular fa-triangle-exclamation
+            path: 01-guide/04-baml-basics/error-handling.mdx
+          - page: Concurrent Calls
+            icon: fa-regular fa-clock-rotate-left
+            path: 01-guide/04-baml-basics/concurrent-calls.mdx
+
+      - section: BAML Advanced
+        contents:
+          - page: LLM Client Registry
+            icon: fa-regular fa-gears
+            path: 01-guide/05-baml-advanced/client-registry.mdx
+          - page: Dynamic / Runtime Types
+            icon: fa-solid fa-person-running
+            path: 01-guide/05-baml-advanced/dynamic-types.mdx
+          - page: Reusing Prompt Snippets
+            icon: fa-regular fa-repeat
+            path: 01-guide/05-baml-advanced/reusing-prompt-snippets.mdx
+          - page: Prompt Caching / Message Role Metadata
+            icon: fa-regular fa-database
+            path: 01-guide/05-baml-advanced/prompt-caching.mdx
+          - page: Validations
+            icon: fa-regular fa-check-circle
+            path: 01-guide/05-baml-advanced/validations.mdx
+      - section: Observability
+        contents:
+          - page: Tracking Usage
+            icon: fa-regular fa-bar-chart
+            path: 01-guide/07-observability/studio.mdx
+      - section: Comparisons
+        contents:
+          - page: BAML vs Marvin
+            icon: fa-solid fa-magnifying-glass
+            path: 01-guide/09-comparisons/marvin.mdx
+          - page: BAML vs Pydantic
+            icon: fa-solid fa-magnifying-glass
+            path: 01-guide/09-comparisons/pydantic.mdx
+      - page: Contact
+        icon: fa-regular fa-envelope
+        path: 01-guide/contact.mdx
+  - tab: examples
+    layout:
+      - page: Interactive Examples
+        icon: fa-solid fa-play
+        path: 02-examples/interactive-examples.mdx
+      - section: Prompt Engineering
+        contents:
+          - page: Reducing Hallucinations
+            icon: fa-regular fa-person-fairy
+            path: 01-guide/06-prompt-engineering/hallucinations.mdx
+          - page: Chat
+            icon: fa-regular fa-comments
+            path: 01-guide/06-prompt-engineering/chat-history.mdx
+          - page: Tools / Function Calling
+            icon: fa-regular fa-wrench
+            path: 01-guide/06-prompt-engineering/tools.mdx
+          # - page: Zero shot prompting
+          #   icon: fa-regular fa-bullseye
+          #   path: 01-guide/introduction.mdx
+          # - page: Few shot prompting
+          #   icon: fa-regular fa-dice-three
+          #   path: 01-guide/introduction.mdx
+          - page: Chain of Thought
+            icon: fa-solid fa-brain
+            path: 01-guide/06-prompt-engineering/chain-of-thought.mdx
+          - page: Symbol Tuning
+            icon: fa-regular fa-adjust
+            path: 01-guide/06-prompt-engineering/symbol-tuning.mdx
+          # - page: Self-Consistency Prompting
+          #   icon: fa-regular fa-sync-alt
+          #   path: 01-guide/introduction.mdx
+          # - page: Prompt Chaining
+          #   icon: fa-regular fa-link
+          #   path: 01-guide/introduction.mdx
+          - page: Retrieval Augmented Generation
+            icon: fa-regular fa-database
+            path: 01-guide/introduction.mdx
+      # - section: Python
+      #   icon: fa-brands fa-python
+      #   contents:
+      #     - page: Hello World
+      #       icon: fa-regular fa-rocket
+      #       path: 01-guide/introduction.mdx
+      #     - page: FastAPI + RAG + streaming
+      #       icon: fa-regular fa-database
+      #       path: 01-guide/introduction.mdx
+      #     - page: Flask + ChatBot
+      #       icon: fa-regular fa-comments
+      #       path: 01-guide/introduction.mdx
+      #     - page: Reflex + Receipt Parsing
+      #       icon: fa-regular fa-receipt
+      #       path: 01-guide/introduction.mdx
+      #     - page: Parsing PDFs
+      #       icon: fa-regular fa-file-pdf
+      #       path: 01-guide/introduction.mdx
+      #     - page: Web scraping with Selenium
+      #       icon: fa-regular fa-globe
+      #       path: 01-guide/introduction.mdx
+      # - section: Typescript
+      #   icon: fa-brands fa-js
+      #   contents:
+      #     - page: Hello World
+      #       icon: fa-regular fa-rocket
+      #       path: 01-guide/introduction.mdx
+      #     - page: NextJS + RAG + streaming
+      #       icon: fa-regular fa-database
+      #       path: 01-guide/introduction.mdx
+      #     - page: React + Express + ChatBot
+      #       icon: fa-regular fa-comments
+      #       path: 01-guide/introduction.mdx
+      #     - page: NextJS parsing PDFs
+      #       icon: fa-regular fa-file-pdf
+      #       path: 01-guide/introduction.mdx
+      #     - page: Web scraping with Playwright
+      #       icon: fa-regular fa-globe
+      #       path: 01-guide/introduction.mdx
+      # - section: Ruby
+      #   icon: fa-solid fa-gem
+      #   contents:
+      #     - page: Hello world
+      #       icon: fa-regular fa-rocket
+      #       path: 01-guide/introduction.mdx
+      #     - page: Rails + RAG + streaming
+      #       icon: fa-regular fa-database
+      #       path: 01-guide/introduction.mdx
+      # - section: Rest API Hello World
+      #   icon: fa-regular fa-network-wired
+      #   contents:
+      #     - page: Ruby
+      #       icon: fa-solid fa-gem
+      #       path: 01-guide/introduction.mdx
+      #     - page: Java
+      #       icon: fa-brands fa-java
+      #       path: 01-guide/introduction.mdx
+      #     - page: Go
+      #       icon: fa-brands fa-golang
+      #       path: 01-guide/introduction.mdx
+      #     - page: C++
+      #       icon: fa-brands fa-cuttlefish
+      #       path: 01-guide/introduction.mdx
+      #     - page: Rust
+      #       icon: fa-brands fa-rust
+      #       path: 01-guide/introduction.mdx
+      #     - page: PHP
+      #       icon: fa-brands fa-php
+      #       path: 01-guide/introduction.mdx
+  - tab: reference
+    layout:
+      # - page: Changelog
+      #   icon: fa-regular fa-history
+      #   path: ./Changelog.mdx
+
+      # - section: Boundary Tools
+      #   contents:
+      #     - api: Document Extraction API
+      #       icon: fa-regular fa-dollar-sign
+      #       slug: reference/extract
+      #       display-errors: true
+      #       layout:
+      #         - POST /extract
+      #         - page: Extraction Examples
+      #           path: reference/extract/examples.mdx
+      #           slug: examples
+      - page: Overview
+        path: 03-reference/overview.mdx
+
+      - section: baml-cli
+        contents:
+          - page: init
+            path: 03-reference/baml-cli/init.mdx
+          - page: generate
+            path: 03-reference/baml-cli/generate.mdx
+          - page: serve
+            path: 03-reference/baml-cli/serve.mdx
+          - page: dev
+            path: 03-reference/baml-cli/dev.mdx
+      - section: Language Reference
+        slug: baml
+        contents:
+          - section: General BAML Syntax
+            contents:
+              - page: comments
+                path: 03-reference/baml/comments.mdx
+              - page: Environment Variables
+                path: 03-reference/baml/env-vars.mdx
+              - page: string
+                path: 03-reference/baml/string.mdx
+              - page: "int / float"
+                path: 03-reference/baml/int-float.mdx
+              - page: bool
+                path: 03-reference/baml/bool.mdx
+              - page: array (list)
+                path: 03-reference/baml/array.mdx
+              - page: map (dictionary)
+                path: 03-reference/baml/map.mdx
+          - page: Types
+            path: 03-reference/baml/types.mdx
+          - page: function
+            path: 03-reference/baml/function.mdx
+          - page: test
+            path: 03-reference/baml/test.mdx
+          - page: template_string
+            path: 03-reference/baml/template_string.mdx
+          - page: "client"
+            path: 03-reference/baml/client-llm.mdx
+          - page: class
+            path: 03-reference/baml/class.mdx
+          - page: enum
+            path: 03-reference/baml/enum.mdx
+          - page: generator
+            path: 03-reference/generator.mdx
+      - section: LLM Client Providers
+        contents:
+          - page: "AWS Bedrock"
+            path: 03-reference/baml/clients/providers/aws-bedrock.mdx
+          - page: "Anthropic"
+            path: 03-reference/baml/clients/providers/anthropic.mdx
+          - page: "Google AI Studio"
+            path: 03-reference/baml/clients/providers/google-ai.mdx
+          - page: "OpenAI"
+            path:
03-reference/baml/clients/providers/openai.mdx + - page: "OpenAI from Azure" + path: 03-reference/baml/clients/providers/azure.mdx + - page: "Vertex" + path: 03-reference/baml/clients/providers/vertex.mdx + - page: "openai-generic" + path: 03-reference/baml/clients/providers/openai-generic.mdx + - page: "openai-generic: Groq" + path: 03-reference/baml/clients/providers/groq.mdx + - page: "openai-generic: Hugging Face" + path: 03-reference/baml/clients/providers/huggingface.mdx + - page: "openai-generic: Keywords AI" + path: 03-reference/baml/clients/providers/keywordsai.mdx + - page: "openai-generic: LM Studio" + path: 03-reference/baml/clients/providers/lmstudio.mdx + - page: "openai-generic: Ollama" + path: 03-reference/baml/clients/providers/ollama.mdx + - page: "openai-generic: OpenRouter" + path: 03-reference/baml/clients/providers/openrouter.mdx + - page: "openai-generic: TogetherAI" + path: 03-reference/baml/clients/providers/together.mdx + - page: "openai-generic: Unify AI" + path: 03-reference/baml/clients/providers/unify.mdx + - page: "openai-generic: vLLM" + path: 03-reference/baml/clients/providers/vllm.mdx + - section: LLM Client Strategies + contents: + - page: "Retry Policy" + path: 03-reference/baml/clients/strategy/retry.mdx + - page: "Fallback" + path: 03-reference/baml/clients/strategy/fallback.mdx + - page: "Round Robin" + path: 03-reference/baml/clients/strategy/round-robin.mdx + - section: baml_client + contents: + - page: TypeBuilder + path: 03-reference/baml_client/typebuilder.mdx + - page: ClientRegistry + path: 01-guide/05-baml-advanced/client-registry.mdx + + - section: Prompt Syntax + contents: + - page: What is jinja? + path: 03-reference/baml/prompt-syntax/what-is-jinja.mdx + - page: "ctx.output_format" + path: 03-reference/baml/prompt-syntax/output-format.mdx + - page: "ctx.client" + path: 03-reference/baml/prompt-syntax/ctx.mdx + - page: "_.role" + path: 03-reference/baml/prompt-syntax/role.mdx + - page: Variables + path: 03-reference/baml/prompt-syntax/variables.mdx + - page: Conditionals + path: 03-reference/baml/prompt-syntax/conditionals.mdx + - page: Loops + path: 03-reference/baml/prompt-syntax/loops.mdx + - section: Editor Extension Settings + contents: + - page: baml.cliPath + path: 03-reference/vscode-ext/clipath.mdx + - page: baml.generateCodeOnSave + path: 03-reference/vscode-ext/generateCodeOnSave.mdx + - page: baml.restartTSServerOnSave + path: 03-reference/vscode-ext/restartTSServerOnSave.mdx + - section: Boundary Extraction API + contents: + - api: API Reference + layout: + - POST /extract + - page: Extraction Examples + path: 03-reference/extract/examples.mdx + slug: extract + summary: 03-reference/extract/summary.mdx + display-errors: true + # - page: Advanced + # path: 01-guide/introduction.mdx + + # - section: "baml_client: Python" + # contents: + # - page: TypeBuilder + # path: 01-guide/introduction.mdx + # - page: ClientRegistry + # path: 01-guide/introduction.mdx + # - page: BamlError + # path: 01-guide/introduction.mdx + # - page: BamlImagePy + # path: 01-guide/introduction.mdx + # - page: BamlAudioPy + # path: 01-guide/introduction.mdx + # - page: b + # path: 01-guide/introduction.mdx + # - page: async_client + # path: 01-guide/introduction.mdx + # - page: sync_client + # path: 01-guide/introduction.mdx + # - page: reset_baml_env_vars + # path: 01-guide/introduction.mdx + # - page: trace + # path: 01-guide/introduction.mdx + # - page: set_tags + # path: 01-guide/introduction.mdx + # - section: "baml_client: Typescript" + # contents: + 
# - page: TypeBuilder + # path: 01-guide/introduction.mdx + # - page: ClientRegistry + # path: 01-guide/introduction.mdx + # - page: BamlError + # path: 01-guide/introduction.mdx + # - page: BamlImageTs + # path: 01-guide/introduction.mdx + # - page: BamlAudioTs + # path: 01-guide/introduction.mdx + # - page: b + # path: 01-guide/introduction.mdx + # - page: async_client + # path: 01-guide/introduction.mdx + # - page: sync_client + # path: 01-guide/introduction.mdx + # - page: resetBamlEnvVars + # path: 01-guide/introduction.mdx + # - page: trace_async + # path: 01-guide/introduction.mdx + # - page: trace_sync + # path: 01-guide/introduction.mdx + # - page: set_tags + # path: 01-guide/introduction.mdx + - tab: playground + - tab: changelog + layout: + - page: Changelog + icon: fa-regular fa-history + path: pages/changelog.mdx + + # - tab: paid-tooling + # layout: + # - section: Extract API + # contents: + # - api: API Reference + # slug: reference/extract + # summary: reference/extract/summary.mdx + # display-errors: true + # layout: + # - POST /extract + # - page: Extraction Examples + # path: reference/extract/examples.mdx + # slug: examples + +navbar-links: + - type: github + value: https://github.com/boundaryml/baml + - type: filled + text: Help on Discord + url: https://discord.gg/BTNBeXGuaS + +colors: + accentPrimary: + light: "#a78bfa" + background: + light: "#ffffff" + sidebar-background: + light: "#fefefe" +logo: + light: assets/favicon.ico + height: 40 +favicon: assets/favicon.ico +css: assets/styles.css +layout: + page-width: full + disable-header: false + searchbar-placement: header + header-height: 54px + tabs-placement: header + +typography: + bodyFont: + name: OpenSans + paths: + - path: ./assets/open-sans-v17-all-charsets-regular.woff2 + weight: "400" + - path: ./assets/open-sans-v17-all-charsets-italic.woff2 + style: italic + headingsFont: + name: OpenSans + path: ./assets/open-sans-v17-all-charsets-700.woff2 + weight: "600" + +# Legacy docs +redirects: + - source: "/docs/get-started/what-is-baml" + destination: "/guide/introduction/what-is-baml" + - source: "/docs/get-started/interactive-demos" + destination: "/examples/interactive-examples" + - source: "/docs/get-started/quickstart/python" + destination: "/guide/installation-language/python" + - source: "/docs/get-started/quickstart/typescript" + destination: "/guide/installation-language/typescript" + - source: "/docs/get-started/quickstart/ruby" + destination: "/guide/installation-language/ruby" + - source: "/docs/get-started/quickstart/openapi" + destination: "/guide/installation-language/rest-api-other-languages" + - source: "/docs/get-started/quickstart/editors-vscode" + destination: "/guide/installation-editors/vs-code-extension" + - source: "/docs/get-started/quickstart/editors-other" + destination: "/guide/installation-editors/others" + - source: "/docs/get-started/debugging/vscode-playground" + destination: "/guide/development/terminal-logs" + - source: "/docs/get-started/debugging/enable-logging" + destination: "/guide/development/terminal-logs" + - source: "/baml/get-started/debugging/exception-handling" + destination: "/guide/baml-basics/error-handling" + - source: "/docs/get-started/deploying/docker" + destination: "/docs/get-started/deploying/docker" + - source: "/docs/get-started/deploying/nextjs" + destination: "/docs/get-started/deploying/nextjs" + - source: "/docs/get-started/deploying/aws" + destination: "/docs/get-started/deploying/aws" + - source: "/docs/get-started/deploying/openapi" + destination: 
"/docs/get-started/deploying/openapi" + - source: "/docs/snippets/syntax/comments" + destination: "/ref/baml/general-baml-syntax/comments" + - source: "/docs/snippets/syntax/strings" + destination: "/ref/baml/general-baml-syntax/string" + - source: "/docs/snippets/syntax/lists" + destination: "/ref/baml/general-baml-syntax/array-list" + - source: "/docs/snippets/syntax/dictionaries" + destination: "/ref/baml/general-baml-syntax/map-dictionary" + - source: "/docs/snippets/supported-types" + destination: "/ref/baml/types" + - source: "/docs/snippets/clients/overview" + destination: "/ref/baml/client-llm" + - source: "/docs/snippets/clients/providers/anthropic" + destination: "/ref/llm-client-providers/anthropic" + - source: "/docs/snippets/clients/providers/aws-bedrock" + destination: "/ref/llm-client-providers/aws-bedrock" + - source: "/docs/snippets/clients/providers/azure" + destination: "/ref/llm-client-providers/open-ai-from-azure" + - source: "/docs/snippets/clients/providers/gemini" + destination: "/ref/llm-client-providers/google-ai-studio" + - source: "/docs/snippets/clients/providers/groq" + destination: "/ref/llm-client-providers/openai-generic-groq" + - source: "/docs/snippets/clients/providers/huggingface" + destination: "/ref/llm-client-providers/openai-generic-hugging-face" + - source: "/docs/snippets/clients/providers/ollama" + destination: "/ref/llm-client-providers/openai-generic-ollama" + - source: "/docs/snippets/clients/providers/openai" + destination: "/ref/llm-client-providers/open-ai" + - source: "/docs/snippets/clients/providers/openai-generic" + destination: "/ref/llm-client-providers/openai-generic" + - source: "/docs/snippets/clients/providers/openrouter" + destination: "/ref/llm-client-providers/openai-generic-open-router" + - source: "/docs/snippets/clients/providers/together" + destination: "/ref/llm-client-providers/openai-generic-together-ai" + - source: "/docs/snippets/clients/providers/vertex" + destination: "/ref/llm-client-providers/vertex" + - source: "/docs/snippets/clients/providers/vllm" + destination: "/ref/llm-client-providers/openai-generic-v-llm" + - source: "/docs/snippets/clients/providers/lmstudio" + destination: "/ref/llm-client-providers/openai-generic-lm-studio" + - source: "/docs/snippets/clients/providers/keywordsai" + destination: "/ref/llm-client-providers/openai-generic-keywords-ai" + - source: "/docs/snippets/clients/fallback" + destination: "/ref/llm-client-strategies/fallback" + - source: "/docs/snippets/clients/round-robin" + destination: "/ref/llm-client-strategies/round-robin" + - source: "/docs/snippets/clients/retry" + destination: "/ref/llm-client-strategies/retry-policy" + - source: "/docs/snippets/functions/overview" + destination: "/ref/baml/function" + - source: "/docs/snippets/functions/classification" + destination: "/guide/baml-basics/prompting-with-baml" + - source: "/docs/snippets/functions/extraction" + destination: "/guide/baml-basics/prompting-with-baml" + - source: "/docs/snippets/functions/function-calling" + destination: "/examples/prompt-engineering/tools-function-calling" + - source: "/docs/snippets/class" + destination: "/ref/baml/class" + - source: "/docs/snippets/enum" + destination: "/ref/baml/enum" + - source: "/docs/snippets/prompt-syntax/what-is-jinja" + destination: "/ref/prompt-syntax/what-is-jinja" + - source: "/docs/snippets/prompt-syntax/output-format" + destination: "/ref/prompt-syntax/ctx-output-format" + - source: "/docs/snippets/prompt-syntax/roles" + destination: "/ref/prompt-syntax/role" + - 
source: "/docs/snippets/prompt-syntax/variables" + destination: "/ref/prompt-syntax/variables" + - source: "/docs/snippets/prompt-syntax/conditionals" + destination: "/ref/prompt-syntax/conditionals" + - source: "/docs/snippets/prompt-syntax/loops" + destination: "/ref/prompt-syntax/loops" + - source: "/docs/snippets/prompt-syntax/comments" + destination: "/ref/baml/general-baml-syntax/comments" + - source: "/docs/snippets/prompt-syntax/ctx" + destination: "/ref/prompt-syntax/ctx-client" + - source: "/docs/snippets/template-string" + destination: "/ref/baml/template-string" + - source: "/docs/snippets/test-cases" + destination: "/ref/baml/test" + - source: "/docs/calling-baml/dynamic-types" + destination: "/guide/baml-advanced/dynamic-runtime-types" + - source: "/docs/calling-baml/client-registry" + destination: "/guide/baml-advanced/llm-client-registry" + - source: "/docs/calling-baml/checks-and-asserts" + destination: "/guide/baml-basics/error-handling" + - source: "/docs/calling-baml/generate-baml-client" + destination: "/ref/baml-client/type-builder" + - source: "/docs/calling-baml/set-env-vars" + destination: "/docs/guide/development/environment-variables" + - source: "/docs/calling-baml/calling-functions" + destination: "/guide/baml-basics/prompting-and-calling-ll-ms-with-baml" + - source: "/docs/calling-baml/streaming" + destination: "/guide/baml-basics/streaming" + - source: "/docs/calling-baml/concurrent-calls" + destination: "/guide/baml-basics/concurrent-calls" + - source: "/docs/calling-baml/multi-modal" + destination: "/guide/baml-basics/multi-modal" + - source: "/docs/baml-nextjs/baml-nextjs" + destination: "/docs/baml-nextjs/baml-nextjs" + - source: "/docs/observability/overview" + destination: "/guide/observability/tracking-usage" + - source: "/docs/observability/tracing-tagging" + destination: "/guide/observability/tracking-usage" + - source: "/docs/comparisons/marvin" + destination: "/guide/comparisons/baml-vs-marvin" + - source: "/docs/comparisons/pydantic" + destination: "/guide/comparisons/baml-vs-pydantic" + - source: "/contact" + destination: "/guide/contact" + - source: "/docs/reference/env-vars" + destination: "/docs/guide/development/environment-variables" + - source: "/docs/incidents/2024-07-10-ssrf-issue-in-fiddle-proxy" + destination: "/changelog/changelog" + - source: "/document-extraction-api/overview/docs/api" + destination: "/ref/boundary-extraction-api/extract" + - source: "/document-extraction-api/overview/docs/api/extract-data" + destination: "/ref/boundary-extraction-api/extract" + - source: "/document-extraction-api/overview/docs/api/extraction-examples" + destination: "/ref/boundary-extraction-api/extract/examples" diff --git a/fern/favicon.png b/fern/favicon.png new file mode 100644 index 000000000..be0d22192 Binary files /dev/null and b/fern/favicon.png differ diff --git a/fern/fern.config.json b/fern/fern.config.json new file mode 100644 index 000000000..a517020d8 --- /dev/null +++ b/fern/fern.config.json @@ -0,0 +1,4 @@ +{ + "organization": "boundary", + "version": "*" +} diff --git a/fern/generators.yml b/fern/generators.yml new file mode 100644 index 000000000..1d32d07e0 --- /dev/null +++ b/fern/generators.yml @@ -0,0 +1,2 @@ +api: + path: ./openapi/openapi.yaml diff --git a/fern/openapi/openapi.yaml b/fern/openapi/openapi.yaml new file mode 100644 index 000000000..4df347894 --- /dev/null +++ b/fern/openapi/openapi.yaml @@ -0,0 +1,182 @@ +openapi: 3.0.3 +info: + title: BoundaryML Extract API + version: "1.0" +servers: + - url: 
https://api2.boundaryml.com/v3
+paths:
+  /extract:
+    post:
+      summary: Extract
+      description: |
+        Upload one or more files along with a prompt to extract data. The API processes the files based on the prompt and returns the extracted information.
+
+        A PDF may produce an array of many extracted JSON blobs, e.g. one per page.
+
+      operationId: extractData
+      security:
+        - BearerAuth: []
+      requestBody:
+        required: true
+        content:
+          multipart/form-data:
+            schema:
+              type: object
+              properties:
+                files:
+                  type: array
+                  items:
+                    type: string
+                    format: binary
+                  description: One or more files to be processed.
+                prompt:
+                  type: string
+                  description: Instruction for data extraction, e.g. "focus on the colors of the images in this document" or "only focus on extracting addresses".
+              required:
+                - files
+            encoding:
+              files:
+                style: form
+                explode: true
+            examples:
+              ExampleRequest:
+                summary: Example request with files and prompt
+                value:
+                  files:
+                    - "@path/to/your/file1.pdf"
+                    - "@path/to/your/file2.png"
+                  prompt: "Please extract the text content."
+      responses:
+        "200":
+          description: Successful Response
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/ExtractResponse"
+        "400":
+          description: Invalid Request Parameters
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/ErrorResponse"
+        "415":
+          description: Unsupported Media Type
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/ErrorResponse"
+        "422":
+          description: Validation Error
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/HTTPValidationError"
+        "500":
+          description: Internal Server Error
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/ErrorResponse"
+components:
+  securitySchemes:
+    BearerAuth:
+      type: http
+      scheme: bearer
+      bearerFormat: JWT
+  schemas:
+    ExtractResponse:
+      type: object
+      properties:
+        extractions:
+          type: array
+          items:
+            $ref: "#/components/schemas/Extraction"
+        usage:
+          $ref: "#/components/schemas/Usage"
+        request_id:
+          type: string
+          description: Unique identifier for the request.
+      required:
+        - extractions
+        - usage
+        - request_id
+    Extraction:
+      type: object
+      properties:
+        source:
+          $ref: "#/components/schemas/Source"
+        output:
+          type: object
+          description: Extracted data from the file, in JSON format.
+      required:
+        - source
+        - output
+    Source:
+      type: object
+      properties:
+        type:
+          type: string
+          description: Media type of the file.
+        name:
+          type: string
+          description: Name of the file.
+        page:
+          type: integer
+          description: Page number, if applicable.
+      required:
+        - type
+    Usage:
+      type: object
+      description: Usage statistics for the request. A request goes through the BoundaryML pipeline, where documents can be converted into images; along the way, the number of characters consumed and produced and the number of megapixels consumed are tracked.
+      properties:
+        consumed_chars:
+          type: integer
+          description: Number of characters processed.
+        produced_chars:
+          type: integer
+          description: Number of characters produced.
+        consumed_megapixels:
+          type: number
+          description: Number of megapixels processed.
+      required:
+        - consumed_chars
+        - produced_chars
+        - consumed_megapixels
+    ErrorResponse:
+      type: object
+      properties:
+        error:
+          type: string
+          description: Error message detailing the issue.
+ required: + - error + HTTPValidationError: + type: object + title: HTTP Validation Error + properties: + detail: + type: array + title: Detail + items: + $ref: "#/components/schemas/ValidationError" + ValidationError: + type: object + title: Validation Error + required: + - loc + - msg + - type + properties: + loc: + type: array + title: Location + items: + anyOf: + - type: string + - type: integer + msg: + type: string + title: Message + type: + type: string + title: Error Type diff --git a/fern/package.json b/fern/package.json new file mode 100644 index 000000000..6de00bfa2 --- /dev/null +++ b/fern/package.json @@ -0,0 +1,9 @@ +{ + "scripts": { + "dev": "npx fern docs dev", + "preview": "npx fern generate --preview --docs" + }, + "devDependencies": { + "fern-api": "^0.31.24" + } +} diff --git a/fern/pages/changelog.mdx b/fern/pages/changelog.mdx new file mode 100644 index 000000000..d4df93191 --- /dev/null +++ b/fern/pages/changelog.mdx @@ -0,0 +1,564 @@ +--- +title: Changelog +--- + +All notable changes to this project will be documented in this file. See [conventional commits](https://www.conventionalcommits.org/) for commit guidelines. + + +
+## [0.62.0](https://github.com/boundaryml/baml/compare/0.61.1..0.62.0) - 2024-10-21 + +### Features + +- Support serializing/deserializing `baml_py.Image`, `baml_py.Audio` for pydantic (#1062) - ([11cb699](https://github.com/boundaryml/baml/commit/11cb69903dce1ae348c68f88a82b4731da3977a7)) - Samuel Lijin +- Support rendering input classes with aliases (#1045) - ([3824cda](https://github.com/boundaryml/baml/commit/3824cda75524105f3401e5c7e4c21e604d639f76)) - aaronvg +- Add unstable_internal_repr on FunctionResult in python (#1068) - ([00082e8](https://github.com/boundaryml/baml/commit/00082e8b941d3648ec499215d2c38091f36db944)) - hellovai +- Add literal support for type_builder (#1069) - ([c0085d9](https://github.com/boundaryml/baml/commit/c0085d908cbf8696623fd70f49de5ca8325de06c)) - hellovai + +### Bug Fixes + +- Surface errors in fallbacks containing only erroneous clients (#1061) - ([b69ef79](https://github.com/boundaryml/baml/commit/b69ef79542ec818b8779f9710dad65d33166c862)) - Greg Hale +- Fix parser so that we are able to correctly detect sequences of empty strings. (#1048) - ([977e277](https://github.com/boundaryml/baml/commit/977e2776119a6f1e79f29cbe596b1c31697becb5)) - hellovai +- Make substring match algorithm case insensitive (#1056) - ([fa2c477](https://github.com/boundaryml/baml/commit/fa2c4770791297a7a37a3f0c837ede4bb709f0ef)) - Antonio Sarosi +- Fix vertex-ai citation data being optional (#1058) - ([5eae0a7](https://github.com/boundaryml/baml/commit/5eae0a73be6cc8286ce045185537aeed0b9feb7d)) - aaronvg +- Fix bug to correctly cast to pydantic types in ambiguous scenarios where BAML knows better (#1059) - ([830b0cb](https://github.com/boundaryml/baml/commit/830b0cb194b99fa6f019928e7466dcf3e3992596)) - hellovai +- Parser: Prefer case sensitive match over case insensitive (#1063) - ([cd6b141](https://github.com/boundaryml/baml/commit/cd6b141020ec8dfd2514c82ffffaebc5678a025b)) - Antonio Sarosi +- Only popup the vscode env var dialog once (#1066) - ([1951474](https://github.com/boundaryml/baml/commit/19514745cfc8efeb8bda0be655e0fa2f216e4b29)) - aaronvg + +### Documentation + +- Docs for literal types (#1030) - ([55e5964](https://github.com/boundaryml/baml/commit/55e596419055c8da52b841b9ecbf16e328bc1033)) - Antonio Sarosi +- Contribution guide (#1055) - ([f09d943](https://github.com/boundaryml/baml/commit/f09d9432d95c876f5e63f3abdb47a40417c5c45a)) - aaronvg + +### Misc + +- Fix VSCode metrics (#1044) - ([a131336](https://github.com/boundaryml/baml/commit/a13133656e1610cac9a92aa4b4459c78340c7304)) - hellovai +- Add more test cases for unquoted strings in objects (#1054) - ([2d1b700](https://github.com/boundaryml/baml/commit/2d1b700e82604e444d904cfeb67f46ced97153a5)) - hellovai +
+ +## [0.61.1](https://github.com/boundaryml/baml/compare/0.61.0..0.61.1) - 2024-10-15 + +### Bug Fixes + +- add musl to the ts release artifacts (#1042) - ([e74f3e9](https://github.com/boundaryml/baml/commit/e74f3e90489a403e38b39cc694d30d038ad38b8d)) - Samuel Lijin + +
+ +## [0.61.0](https://github.com/boundaryml/baml/compare/0.60.0..0.61.0) - 2024-10-14 + +### Features + +- Implement literal types (#978) - ([9e7431f](https://github.com/boundaryml/baml/commit/9e7431f43b74d4428e6a20b9aa3a1e93768ff905)) - Antonio Sarosi +- allow installing the TS library on node-alpine (#1029) - ([1c37a0d](https://github.com/boundaryml/baml/commit/1c37a0d71d921d13f05340ff6727255ba6074152)) - Samuel Lijin +- Add WYSIWYG UI (Swagger UI) to baml-cli dev (#1019) - ([0c73cab](https://github.com/boundaryml/baml/commit/0c73cab3d6ac3bbb04cc898ac102900ca9b17f86)) - Greg Hale +- Suppress streaming for Numbers (#1032) - ([3f4621b](https://github.com/boundaryml/baml/commit/3f4621b36555062312aabd9ba8435b965ba8fd92)) - Greg Hale + +### Bug Fixes + +- Add limit on connection pool to prevent stalling issues in pyo3 and other ffi boundaries (#1027) - ([eb90e62](https://github.com/boundaryml/baml/commit/eb90e62ffe21109e0da1bd74439d36bb37246ec3)) - hellovai +- Update docs (#1025) - ([2dd1bb6](https://github.com/boundaryml/baml/commit/2dd1bb6cf743c20af53d7147db8a4573de9cdbe0)) - Farookh Zaheer Siddiqui +- Fix parsing for streaming of objects more stable (#1031) - ([8aa9c00](https://github.com/boundaryml/baml/commit/8aa9c00b8f26a8c30178ff25aecc1c3b47b6696e)) - hellovai +- Fix python BamlValidationError type (#1036) - ([59a9510](https://github.com/boundaryml/baml/commit/59a9510c9d2c1216df01b0701cc23afb02e3f700)) - aaronvg + +### Miscellaneous + +- Popup settings dialog when no env vars set (#1033) - ([b9fa52a](https://github.com/boundaryml/baml/commit/b9fa52aea8686f8095878e7f210c2d937b533c63)) - aaronvg +- Bump version to 0.61.0 - ([ca2242b](https://github.com/boundaryml/baml/commit/ca2242b26214699268fda9e9ac07338c6491026d)) - Aaron Villalpando + +## [0.60.0](https://github.com/boundaryml/baml/compare/0.59.0..0.60.0) - 2024-10-09 + +### Miscellaneous Chores + +- update Dockerfile (#1017) - ([51539b7](https://github.com/boundaryml/baml/commit/51539b7b5778d6a3e6619698d2033d4f66f15d27)) - Ikko Eltociear Ashimine +- Revert "feat: add a WYSIWYG UI (Swagger UI) to `baml-cli dev` (#1011)" (#1018) - ([f235050](https://github.com/boundaryml/baml/commit/f235050a57916116aff8359236b819ac69011a21)) - Greg Hale + +### Bug fixes + +- Fix python types for BamlValidationError (#1020) - ([520a09c](https://github.com/boundaryml/baml/commit/520a09c478ea8c5eb811447ce9b36689692aa01d)) - aaronvg +- coerce floats and ints with commas and other special cases (#1023) - ([904492e](https://github.com/boundaryml/baml/commit/904492ee298727085e00a391beb628c8d999083e)) - aaronvg + +### Docs + +- Add Docs for Jupyter notebook usage (#1008) - ([c51d918](https://github.com/boundaryml/baml/commit/c51d918f76f63ce55b353661459ba3b27b9a0ea7)) - aaronvg + +## [0.59.0](https://github.com/boundaryml/baml/compare/0.58.0..0.59.0) - 2024-10-04 + +### Features + +- **(vertex)** allow specifying creds as JSON object (#1009) - ([98868da](https://github.com/boundaryml/baml/commit/98868da4e75dde3a00178cbf60afebc501d37b0c)) - Samuel Lijin +- Add prompt, raw_output and error message to BamlValidationError in TS and Python (#1005) - ([447dbf4](https://github.com/boundaryml/baml/commit/447dbf4e0d0cf0744307ef50f89050752334d982)) - aaronvg +- Add BamlValidationError to `baml-cli serve` (#1007) - ([3b8cf16](https://github.com/boundaryml/baml/commit/3b8cf1636594c1a7245a733556efa690da40e139)) - aaronvg +- Include a WYSIWYG UI (Swagger UI) to `baml-cli dev` (#1011) - 
([fe9dde4](https://github.com/BoundaryML/baml/commit/fe9dde4f3a7ff0503fd13087da50e4da9d97c3a0)) - imalsogreg + +## [0.58.0](https://github.com/boundaryml/baml/compare/0.57.1..0.58.0) - 2024-10-02 + +### Features + +- Add client registry support for BAML over Rest (OpenAPI) (#1000) - ([abe70bf](https://github.com/boundaryml/baml/commit/abe70bf368c9361a3ab32643735f68e0fafd8425)) - Lorenz Ohly + +### Bug Fixes + +- Improve performance of parsing escaped characters in strings during streaming. (#1002) - ([b35ae2c](https://github.com/boundaryml/baml/commit/b35ae2c4777572206a79af5c2943f5bdd6ada081)) - hellovai + +### Documentation + +- Add Docs for Document Extraction API (#996) - ([da1a5e8](https://github.com/boundaryml/baml/commit/da1a5e876368074235f4474673a1ebfe632e11ed)) - aaronvg + +## [0.57.1](https://github.com/boundaryml/baml/compare/0.57.0..0.57.1) - 2024-09-29 + +### Bug Fixes + +- [BUGFIX] Parser should require a space between class keyword and class name (#990) - ([7528247](https://github.com/boundaryml/baml/commit/752824723404a4ed4c4b1e31c43d140e9346dca2)) - Greg Hale +- Remove dynamic string attributes (#991) - ([0960ab2](https://github.com/boundaryml/baml/commit/0960ab2e0d16c50fef58772336b91297ddac6919)) - Greg Hale +- ts fixes (#992) - ([36af43f](https://github.com/boundaryml/baml/commit/36af43f4f773e1565527916eff7d7837d9f8a983)) - aaronvg +- Bump version to 0.57.1 - ([0aa71dd](https://github.com/boundaryml/baml/commit/0aa71dd4d3aa7082db6a19f0a3a976ff55789d83)) - Aaron Villalpando + +## [0.57.0](https://github.com/boundaryml/baml/compare/0.56.1..0.57.0) - 2024-09-27 + +### Documentation + +- Fix Python dynamic types example (#979) - ([eade116](https://github.com/boundaryml/baml/commit/eade116de14bcc15d738fec911d8653685c13706)) - lorenzoh + +### Features + +- teach vscode/fiddle to explain when we drop information (#897) - ([93e2b9b](https://github.com/boundaryml/baml/commit/93e2b9b8d54a4ced0853ce72596d0b0a9896a0da)) - Samuel Lijin +- Add ability for users to reset env vars to their desire. (#984) - ([69e6c29](https://github.com/boundaryml/baml/commit/69e6c29c82ccc06f8939b9ece75dd7797c8f6b98)) - hellovai + +### Bug Fixes + +- Fixed panic during logging for splitting on UTF-8 strings. (#987) - ([c27a64f](https://github.com/boundaryml/baml/commit/c27a64f6320515cd5ab6385ab93013d3d7ba88b8)) - hellovai +- Improve SAP for triple quoted strings along with unions (#977) - ([44202ab](https://github.com/boundaryml/baml/commit/44202ab63aa3d2881485b9b32fa744797c908e33)) - hellovai +- Add more unit tests for parsing logic inspired by user (#980) - ([48dd09f](https://github.com/boundaryml/baml/commit/48dd09f89b6447cbc1a539ecade66ab4da87b8dc)) - hellovai +- Improve syntax errors e.g. 
class / enum parsing and also update pest model to handle trailing comments (#981) - ([adbb6ae](https://github.com/boundaryml/baml/commit/adbb6ae38833d700bfe0123ac712cd90d7e4d970)) - hellovai
+- Updating docs for env vars (#985) - ([305d6b3](https://github.com/boundaryml/baml/commit/305d6b3e5a57513adc43c8ab9068c523dfc2e69c)) - hellovai
+- When using openai-generic, use a string as the content type in the api request if there's no media (#988) - ([e8fa739](https://github.com/boundaryml/baml/commit/e8fa739838cc124a8eed49103871b1b971063821)) - aaronvg
+
+## [0.56.1](https://github.com/boundaryml/baml/compare/0.56.0..0.56.1) - 2024-09-21
+
+### Bug Fixes
+
+- Improved parser for unions (#975) - ([b390521](https://github.com/boundaryml/baml/commit/b39052111529f217762b3271846006bec4a604de)) - hellovai
+- [syntax] Allow lists to contain trailing comma (#974) - ([9e3dc6c](https://github.com/boundaryml/baml/commit/9e3dc6c90954905a96b599ef28c40094fe48a43e)) - Greg Hale
+
+## [0.56.0](https://github.com/boundaryml/baml/compare/0.55.3..0.56.0) - 2024-09-20
+
+Shout outs to Nico for fixing some internal Rust dependencies, and to Lorenz for correcting our documentation! We really appreciate it :)
+
+### Features
+
+- use better default for openapi/rust client (#958) - ([b74ef15](https://github.com/boundaryml/baml/commit/b74ef15fd4dc09ecc7d1ac8284e7f22cd6d5864c)) - Samuel Lijin
+
+### Bug Fixes
+
+- push optional-list and optional-map validation to post-parse (#959) - ([c0480d5](https://github.com/boundaryml/baml/commit/c0480d5cfd46ce979e957223dc7b5fa744778552)) - Samuel Lijin
+- improve OpenAPI instructions for windows/java (#962) - ([6010efb](https://github.com/boundaryml/baml/commit/6010efbb7990fda966640c3af267de41362d3fa4)) - Samuel Lijin
+- assorted fixes: unquoted strings, openai-generic add api_key for bearer auth, support escape characters in quoted strings (#965) - ([847f3a9](https://github.com/boundaryml/baml/commit/847f3a9bb0f00303eae7e410663efc63e54c38b6)) - hellovai
+- serde-serialize can cause a package dependency cycle (#967) - ([109ae09](https://github.com/boundaryml/baml/commit/109ae0914852f2ee4a771d27103e4e46ad672647)) - Nico
+- make anthropic work in fiddle/vscode (#970) - ([32eccae](https://github.com/boundaryml/baml/commit/32eccae44b27c3fec5fbc3270b6657819d75a426)) - Samuel Lijin
+- make dynamic enums work as outputs in Ruby (#972) - ([7530402](https://github.com/boundaryml/baml/commit/7530402f0dc063f10f57cf7aa7f06790574de705)) - Samuel Lijin
+
+### Documentation
+
+- suggest correct python init command in vscode readme (#954) - ([e99c5dd](https://github.com/boundaryml/baml/commit/e99c5dd1903078d08aef451e4addc6110d7ca279)) - Samuel Lijin
+- add more vscode debugging instructions (#955) - ([342b657](https://github.com/boundaryml/baml/commit/342b657da69441306fa7711d7d14893cf8036f84)) - Samuel Lijin
+- NextJS hook needs to be bound to the correct context (#957) - ([ee80451](https://github.com/boundaryml/baml/commit/ee80451de85063b37e658ba58571c791e8514273)) - aaronvg
+- update nextjs hooks and docs (#952) - ([01cf855](https://github.com/boundaryml/baml/commit/01cf855500159066fdcd162dc2e2087768d5ba28)) - aaronvg
+- Fix some documentation typos (#966) - ([5193cd7](https://github.com/boundaryml/baml/commit/5193cd70686173c863af5ce40fd6bb3792406951)) - Greg Hale
+- Keywords AI router (#953) - ([1c6f975](https://github.com/boundaryml/baml/commit/1c6f975d8cc793841745da0db82ee1e2f1908e56)) - aaronvg
+- Fix `post_generate` comment (#968) - ([919c79f](https://github.com/boundaryml/baml/commit/919c79fa8cd85a96e6559055b2bb436d925dcb2a)) - lorenzoh
+
+## [0.55.3](https://github.com/boundaryml/baml/compare/0.55.2..0.55.3)
+
+### Bug Fixes
+
+- show actionable errors for string[]? and map\<...\>? type validation (#946) - ([48879c0](https://github.com/boundaryml/baml/commit/48879c0744f79b482ef0d2b0624464053558ada4)) - Samuel Lijin
+
+### Documentation
+
+- add reference docs about env vars (#945) - ([dd43bc5](https://github.com/boundaryml/baml/commit/dd43bc59087e809e09ca7d3caf628e179a28fc3e)) - Samuel Lijin
+
+## [0.55.2](https://github.com/boundaryml/baml/compare/0.55.1..0.55.2) - 2024-09-11
+
+### Bug Fixes
+
+- use correct locking strategy inside baml-cli serve (#943) - ([fcb694d](https://github.com/boundaryml/baml/commit/fcb694d033317d8538cc7b2c61aaa94f772778db)) - Samuel Lijin
+
+### Features
+
+- allow using DANGER_ACCEPT_INVALID_CERTS to disable https verification (#901) - ([8873fe7](https://github.com/boundaryml/baml/commit/8873fe7577bc879cf0d550063252c4532dcdfced)) - Samuel Lijin
+
+## [0.55.1](https://github.com/boundaryml/baml/compare/0.55.0..0.55.1) - 2024-09-10
+
+### Bug Fixes
+
+- in generated TS code, put eslint-disable before ts-nocheck - ([16d04c6](https://github.com/BoundaryML/baml/commit/16d04c6e360eefca10b4e0d008b03c34de279491)) - Sam Lijin
+- baml-cli in python works again - ([b57ca0f](https://github.com/boundaryml/baml/commit/b57ca0f529c80f59b79b19132a8f1339a6b7bfe2)) - Sam Lijin
+
+### Documentation
+
+- update java install instructions (#933) - ([b497003](https://github.com/boundaryml/baml/commit/b49700356f2f69c4acbdc953a66a95224656ffaf)) - Samuel Lijin
+
+### Miscellaneous Chores
+
+- add version headers to the openapi docs (#931) - ([21545f2](https://github.com/boundaryml/baml/commit/21545f2a4d9b3987134d98ac720705dde2045290)) - Samuel Lijin
+
+## [0.55.0](https://github.com/boundaryml/baml/compare/0.54.2..0.55.0) - 2024-09-09
+
+With this release, we're announcing support for BAML in all languages: we now
+allow you to call your functions over an HTTP interface, and will generate an
+OpenAPI specification for your BAML functions, so you can now generate a client
+in any language of your choice, be it Golang, Java, PHP, Ruby, Rust, or any of
+the other languages which OpenAPI supports.
+
+Start here to learn more: https://docs.boundaryml.com/docs/get-started/quickstart/openapi
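+
+As a rough sketch of what this enables (the function name, argument, port, and
+exact flags below are illustrative assumptions, not part of the release notes;
+see the quickstart linked above for the real interface):
+
+```bash
+# Serve your BAML functions over HTTP (also emits an OpenAPI spec)
+baml-cli serve
+
+# Call a BAML function from any language via plain HTTP
+curl localhost:2024/call/ExtractResume \
+  -H 'Content-Type: application/json' \
+  -d '{"resume": "Jane Doe, Software Engineer..."}'
+```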
+
+### Features
+
+- implement BAML-over-HTTP (#908) - ([484fa93](https://github.com/boundaryml/baml/commit/484fa93a5a4b4677f531e6ef03bb88d144925c12)) - Samuel Lijin
+- Add anonymous telemetry about playground actions (#925) - ([6f58c9e](https://github.com/boundaryml/baml/commit/6f58c9e3e464a8e774771706c2b0d76adb9e6cda)) - hellovai
+
+## [0.54.2](https://github.com/boundaryml/baml/compare/0.54.1..0.54.2) - 2024-09-05
+
+### Features
+
+- Add a setting to disable restarting TS server in VSCode (#920) - ([628f236](https://github.com/boundaryml/baml/commit/628f2360c415fa8a7b0cd90d7249733ff06acaa9)) - aaronvg
+- Add prompt prefix for map types in ctx.output_format and add more type validation for map params (#919) - ([4d304c5](https://github.com/boundaryml/baml/commit/4d304c583b9188c1963a34e2a153baaf003e36ac)) - hellovai
+
+### Bug fixes
+
+- Fix glibC issues for python linux-x86_64 (#922) - ([9161bec](https://github.com/boundaryml/baml/commit/9161becccf626f8d13a15626481720f29e0f992c)) - Samuel Lijin
+
+### Documentation
+
+- Add nextjs hooks (#921) - ([fe14f5a](https://github.com/boundaryml/baml/commit/fe14f5a4ef95c9ccda916ff80ce852d3855554a3)) - aaronvg
+
+## [0.54.1](https://github.com/boundaryml/baml/compare/0.54.0..0.54.1) - 2024-09-03
+
+### BREAKING CHANGE
+
+- Fix escape characters in quoted strings (#905) - ([9ba6eb8](https://github.com/boundaryml/baml/commit/9ba6eb834e0145f4c57e582b63730d3d0ac9b2e9)) - hellovai
+
+Previously, `"\n"` was interpreted as `"\\n"` in quoted strings. This has been fixed so that `"\n"` is interpreted as a newline character, and likewise for other escape characters.
+
+### Documentation
+
+- updated dead vs-code-extension link (#914) - ([b12f164](https://github.com/boundaryml/baml/commit/b12f1649cf5bfd0d457c5d6d117fd3a21ba5dc6b)) - Christian Warmuth
+- Update docs for setting env vars (#904) - ([ec1ca94](https://github.com/boundaryml/baml/commit/ec1ca94c91af2a51b4190a0bad0e0bc1c052f2a3)) - hellovai
+- Add docs for LMStudio (#906) - ([ea4c187](https://github.com/boundaryml/baml/commit/ea4c18782de1f713e8d69d473f9e1818c97024c6)) - hellovai
+- Fix docs for anthropic (#910) - ([aba2764](https://github.com/boundaryml/baml/commit/aba2764e5b04820d00b08bf52bda603ee27631f1)) - hellovai
+- Update discord links on docs (#911) - ([927357d](https://github.com/boundaryml/baml/commit/927357dd64b36c25513352ed4968ebc62dad6132)) - hellovai
+
+### Features
+
+- BAML_LOG will truncate messages to 1000 characters (modify using env var BOUNDARY_MAX_LOG_CHUNK_SIZE) (#907) - ([d266e5c](https://github.com/boundaryml/baml/commit/d266e5c4157f3b28d2f6454a7ea265dda7296bb2)) - hellovai
+
+### Bug Fixes
+
+- Improve parsing when there are initial closing `]` or `}` (#903) - ([46b0cde](https://github.com/boundaryml/baml/commit/46b0cdeffb15bbab20a43728f52ad2a05623e6f7)) - hellovai
+- Update build script for ruby to build all platforms (#915) - ([df2f51e](https://github.com/boundaryml/baml/commit/df2f51e52615451b3643cc124e7262f11965f3ef)) - hellovai
+- Add unit-test for openai-generic provider and ensure it compiles (#916) - ([fde7c50](https://github.com/boundaryml/baml/commit/fde7c50c939c505906417596d16c7c4607173339)) - hellovai
+
+## [0.54.0](https://github.com/boundaryml/baml/compare/0.53.1..0.54.0) - 2024-08-27
+
+### BREAKING CHANGE
+
+- Update Default Gemini Base URL to v1beta (#891) - ([a5d8c58](https://github.com/boundaryml/baml/commit/a5d8c588e0fd0b7e186d7c71f1f6171334250629)) - gleed
+
+The default base URL for the Gemini provider has been updated to v1beta. This change should have no impact on existing users, as v1beta is the default version for the Gemini python library; we are mirroring this change in BAML.
+
+### Bug Fixes
+
+- Allow promptfiddle to talk to localhost ollama (#886) - ([5f02b2a](https://github.com/boundaryml/baml/commit/5f02b2ac688ceeb5a34e848a8ff87fd43a6b093a)) - Samuel Lijin
+- Update Parser for unions so they handle nested objects better (#900) - ([c5b9a75](https://github.com/boundaryml/baml/commit/c5b9a75ea6da7c45da1999032e2b256bec97d922)) - hellovai
+
+### Documentation
+
+- Add ollama to default prompt fiddle example (#888) - ([49146c0](https://github.com/boundaryml/baml/commit/49146c0e50c88615e4cc97adb595849c23bad8ae)) - Samuel Lijin
+- Adding improved docs + unit tests for caching (#895) - ([ff7be44](https://github.com/boundaryml/baml/commit/ff7be4478b706da049085d432b2ec98627b5da1f)) - hellovai
+
+### Features
+
+- Allow local filepaths to be used in tests in BAML files (image and audio) (#871) - ([fa6dc03](https://github.com/boundaryml/baml/commit/fa6dc03fcdd3255dd83e25d0bfb3b0e740991408)) - Samuel Lijin
+- Add support for absolute file paths in the file specifier (#881) - ([fcd189e](https://github.com/boundaryml/baml/commit/fcd189ed7eb81712bf3b641eb3dde158fc6a62af)) - hellovai
+- Implement shorthand clients (You can now use "openai/gpt-4o" as short for creating a complete client.) (#879) - ([ddd15c9](https://github.com/boundaryml/baml/commit/ddd15c92c3e8d81c24cb7305c9fcbb36b819900f)) - Samuel Lijin
+- Add support for arbitrary metadata (e.g. cache_policy for anthropic) (#893) - ([0d63a70](https://github.com/boundaryml/baml/commit/0d63a70332477761a97783e203c98fd0bf67f151)) - hellovai
+- Expose Exceptions to user code: BamlError, BamlInvalidArgumentError, BamlClientError, BamlClientHttpError, BamlValidationError (#770) - ([7da14c4](https://github.com/boundaryml/baml/commit/7da14c480506e9791b3f4ce52ac73836a042d38a)) - hellovai
+
+### Internal
+
+- AST Restructuring (#857) - ([75b51cb](https://github.com/boundaryml/baml/commit/75b51cbf80a0c8ba19ae05b021ef3c94dacb4e30)) - Anish Palakurthi
+
+## [0.53.1](https://github.com/boundaryml/baml/compare/0.53.0..0.53.1) - 2024-08-11
+
+### Bug Fixes
+
+- fix github release not passing params to napi script causing issues in x86_64 (#872) - ([06b962b](https://github.com/boundaryml/baml/commit/06b962b945f958bf0637d13fec22bd2d59c64c5f)) - aaronvg
+
+### Features
+
+- Add Client orchestration graph in playground (#801) - ([24b5895](https://github.com/boundaryml/baml/commit/24b5895a1f45ac04cba0f19e6da727b5ee766186)) - Anish Palakurthi
+- increase range of python FFI support (#870) - ([ec9b66c](https://github.com/boundaryml/baml/commit/ec9b66c31faf97a58c81c264c7fa1b32e0e9f0ae)) - Samuel Lijin
+
+### Misc
+
+- Bump version to 0.53.1 - ([e4301e3](https://github.com/boundaryml/baml/commit/e4301e37835483f51edf1cad6478e46ff67508fc)) - Aaron Villalpando
+
+## [0.53.0](https://github.com/boundaryml/baml/compare/0.52.1..0.53.0) - 2024-08-05
+
+### Bug Fixes
+
+- make image[] render correctly in prompts (#855) - ([4a17dce](https://github.com/boundaryml/baml/commit/4a17dce43c05efd5f4ea304f2609fe140de1dd8c)) - Samuel Lijin
+
+### Features
+
+- **(ruby)** implement dynamic types, dynamic clients, images, and audio (#842) - ([4a21eed](https://github.com/boundaryml/baml/commit/4a21eed668f32b042fba61f24c9efb8b3794a420)) - Samuel Lijin
+- Codelenses for test cases (#812) -
([7cd8794](https://github.com/boundaryml/baml/commit/7cd87942bf50a72de0ad46154f164fb2c174f25b)) - Anish Palakurthi + +### Issue + +- removed vertex auth token printing (#846) - ([b839316](https://github.com/boundaryml/baml/commit/b83931665a2c3b840eb6c6d31cf3d01c7926e52e)) - Anish Palakurthi +- Fix google type deserialization issue - ([a55b9a1](https://github.com/boundaryml/baml/commit/a55b9a106176ed1ce34bb63397610c2640b37f16)) - Aaron Villalpando + +### Miscellaneous Chores + +- clean up release stuff (#836) - ([eed41b7](https://github.com/boundaryml/baml/commit/eed41b7474417d2e65b2c5d742234cc20fc5644e)) - Samuel Lijin +- Add bfcl results to readme, fix links icons (#856) - ([5ef7f3d](https://github.com/boundaryml/baml/commit/5ef7f3db99d8d23ff97f1e8372ee71ab7aa127aa)) - aaronvg +- Fix prompt fiddle and playground styles, add more logging, and add stop-reason to playground (#858) - ([38e3153](https://github.com/boundaryml/baml/commit/38e3153843a17ae1e87ae9879ab4374b083d77d0)) - aaronvg +- Bump version to 0.53.0 - ([fd16839](https://github.com/boundaryml/baml/commit/fd16839a2c0b9d92bd5bdcb57f950e22d0a29959)) - Aaron Villalpando + +## [0.52.1](https://github.com/boundaryml/baml/compare/0.52.0..0.52.1) - 2024-07-24 + +### Bug Fixes + +- build python x86_64-linux with an older glibc (#834) - ([db12540](https://github.com/boundaryml/baml/commit/db12540a92abf055e286c60864299f53c246b62a)) - Samuel Lijin + +## [0.52.0](https://github.com/boundaryml/baml/compare/0.51.3..0.52.0) - 2024-07-24 + +### Features + +- Add official support for ruby (#823) - ([e81cc79](https://github.com/boundaryml/baml/commit/e81cc79498809a79f427864704b140967a41277a)) - Samuel Lijin + +### Bug Fixes + +- Fix ClientRegistry for Typescript code-gen (#828) - ([b69921f](https://github.com/boundaryml/baml/commit/b69921f45df0182072b09ab28fe6231ccfaa5767)) - hellovai + +## [0.51.2](https://github.com/boundaryml/baml/compare/0.51.1..0.51.2) - 2024-07-24 + +### Features + +- Add support for unions / maps / null in TypeBuilder. 
(#820) - ([8d9e92d](https://github.com/boundaryml/baml/commit/8d9e92d3050a67edbec5ee6056397becbcdb754b)) - hellovai + +### Bug Fixes + +- [Playground] Add a feedback button (#818) - ([f749f2b](https://github.com/boundaryml/baml/commit/f749f2b19b247de2f050beccd1fe8e50b7625757)) - Samuel Lijin + +### Documentation + +- Improvements across docs (#807) - ([bc0c176](https://github.com/boundaryml/baml/commit/bc0c1761699ee2485a0a8ee61cf4fda6b579f974)) - Anish Palakurthi + +## [0.51.1](https://github.com/boundaryml/baml/compare/0.51.0..0.51.1) - 2024-07-21 + +### Features + +- Add a feedback button to VSCode Extension (#811) - ([f371912](https://github.com/boundaryml/baml/commit/f3719127174d8f998579747f14fae8675dafba4c)) - Samuel Lijin + +### Bug + +- Allow default_client_mode in the generator #813 (#815) - ([6df7fca](https://github.com/boundaryml/baml/commit/6df7fcabc1eb55b08a50741f2346440f631abd63)) - hellovai + +## [0.51.0](https://github.com/boundaryml/baml/compare/0.50.0..0.51.0) - 2024-07-19 + +### Bug Fixes + +- Improve BAML Parser for numbers and single-key objects (#785) - ([c5af7b0](https://github.com/boundaryml/baml/commit/c5af7b0d0e881c3046171ca17f317d820e8882e3)) - hellovai +- Add docs for VLLM (#792) - ([79e8773](https://github.com/boundaryml/baml/commit/79e8773e38da524795dda606b9fae09a274118e1)) - hellovai +- LLVM install and rebuild script (#794) - ([9ee66ed](https://github.com/boundaryml/baml/commit/9ee66ed2dd14bc0ee12a788f41eae64377e7f2b0)) - Anish Palakurthi +- Prevent version mismatches when generating baml_client (#791) - ([d793603](https://github.com/boundaryml/baml/commit/d7936036e6afa4a0e738242cfb3feaa9e15b3657)) - aaronvg +- fiddle build fix (#800) - ([d304203](https://github.com/boundaryml/baml/commit/d304203241726ac0ba8781db7ac5693339189eb4)) - aaronvg +- Dont drop extra fields in dynamic classes when passing them as inputs to a function (#802) - ([4264c9b](https://github.com/boundaryml/baml/commit/4264c9b143edda0239af197d110357b1969bf12c)) - aaronvg +- Adding support for a sync client for Python + Typescript (#803) - ([62085e7](https://github.com/boundaryml/baml/commit/62085e79d4d86f580ce189bc60f36bd1414893c4)) - hellovai +- Fix WASM-related issues introduced in #803 (#804) - ([0a950e0](https://github.com/boundaryml/baml/commit/0a950e084748837ee2e269504d22dba66f339ca4)) - hellovai +- Adding various fixes (#806) - ([e8c1a61](https://github.com/boundaryml/baml/commit/e8c1a61a96051160566b6458dac5c89d5ddfb86e)) - hellovai + +### Features + +- implement maps in BAML (#797) - ([97d7e62](https://github.com/boundaryml/baml/commit/97d7e6223c68e9c338fe7110554f1f26b966f7e3)) - Samuel Lijin +- Support Vertex AI (Google Cloud SDK) (#790) - ([d98ee81](https://github.com/boundaryml/baml/commit/d98ee81a9440de0aaa6de05b33b8d3f709003a00)) - Anish Palakurthi +- Add copy buttons to test results in playground (#799) - ([b5eee3d](https://github.com/boundaryml/baml/commit/b5eee3d15a1be4373e25cc8ef1cf6e70d5dd39c9)) - aaronvg + +### Miscellaneous Chores + +- in fern config, defer to installed version (#789) - ([479f1b2](https://github.com/boundaryml/baml/commit/479f1b2b0b52faf47bc529e4c06c533a9467269a)) - fern +- publish docs on every push to the default branch (#796) - ([180824a](https://github.com/boundaryml/baml/commit/180824a3857a32eae679e4df5704abba3aa6246c)) - Samuel Lijin +- 🌿 introducing fern docs (#779) - ([46f06a9](https://github.com/boundaryml/baml/commit/46f06a95a1e262e62476768b812b372b696da1be)) - fern +- Add test for dynamic list input (#798) - 
([7528d6a](https://github.com/boundaryml/baml/commit/7528d6ae10427c1304e356cf5b3c664e4fb2b1b1)) - aaronvg
+
+## [0.50.0](https://github.com/boundaryml/baml/compare/0.49.0..0.50.0) - 2024-07-11
+
+### Bug Fixes
+
+- [Playground] Environment variable button is now visible on all themes (#762) - ([adc4da1](https://github.com/boundaryml/baml/commit/adc4da1fa36cc9c30ea36e25de1a6cefcce0bc97)) - aaronvg
+- [Playground] Fix to cURL rendering and mime_type overriding (#763) - ([67f9c6a](https://github.com/boundaryml/baml/commit/67f9c6add5ea8bbbd5ee82c28476fe0ebbefe344)) - Anish Palakurthi
+
+### Features
+
+- [Runtime] Add support for clients that change at runtime using ClientRegistry (#683) - ([c0fb454](https://github.com/boundaryml/baml/commit/c0fb4540d9193194fcafd7fcef71468442d9e6fa)) - hellovai
+  https://docs.boundaryml.com/docs/calling-baml/client-registry
+
+### Documentation
+
+- Add more documentation for TypeBuilder (#767) - ([85dc8ab](https://github.com/boundaryml/baml/commit/85dc8ab41e0df3267249a1efc4a95f010e52cc73)) - Samuel Lijin
+
+## [0.49.0](https://github.com/boundaryml/baml/compare/0.46.0..0.49.0) - 2024-07-08
+
+### Bug Fixes
+
+- Fixed Azure / Ollama clients. Removing stream_options from azure and ollama clients (#760) - ([30bf88f](https://github.com/boundaryml/baml/commit/30bf88f65c8583ab02db6a7b7db40c1e9f3b05b6)) - hellovai
+
+### Features
+
+- Add support for arm64-linux (#751) - ([adb8ee3](https://github.com/boundaryml/baml/commit/adb8ee3097fd386370f75b3ba179d18b952e9678)) - Samuel Lijin
+
+## [0.48.0](https://github.com/boundaryml/baml/compare/0.47.0..0.48.0) - 2024-07-04
+
+### Bug Fixes
+
+- Fix env variables dialog on VSCode (#750)
+- Playground selects correct function after loading (#757) - ([09963a0](https://github.com/boundaryml/baml/commit/09963a02e581da9eb8f7bafd3ba812058c97f672)) - aaronvg
+
+### Miscellaneous Chores
+
+- Better error messages on logging failures to Boundary Studio (#754) - ([49c768f](https://github.com/boundaryml/baml/commit/49c768fbe8eb8023cba28b8dc68c2553d8b2318a)) - aaronvg
+
+## [0.47.0](https://github.com/boundaryml/baml/compare/0.46.0..0.47.0) - 2024-07-03
+
+### Bug Fixes
+
+- make settings dialog work in vscode again (#750) ([c94e355](https://github.com/boundaryml/baml/commit/c94e35551872f65404136b60f800fb1688902c11)) - aaronvg
+- restore releases on arm64-linux (#751) - ([adb8ee3](https://github.com/boundaryml/baml/commit/adb8ee3097fd386370f75b3ba179d18b952e9678)) - Samuel Lijin
+
+## [0.46.0](https://github.com/boundaryml/baml/compare/0.45.0..0.46.0) - 2024-07-03
+
+### Bug Fixes
+
+- Fixed tracing issues for Boundary Studio (#740) - ([77a4db7](https://github.com/boundaryml/baml/commit/77a4db7ef4b939636472ad4975d74e9d1a577cbf)) - Samuel Lijin
+- Fixed flush() to be more reliable (#744) - ([9dd5fda](https://github.com/boundaryml/baml/commit/9dd5fdad5c2897b49a5a536df2e9ef775857a39d)) - Samuel Lijin
+- Remove error when user passes in extra fields in a class (#746) - ([2755b43](https://github.com/boundaryml/baml/commit/2755b43257f9405ae66a30982d9711fc3f2c0854)) - aaronvg
+
+### Features
+
+- Add support for base_url for the google-ai provider (#747) - ([005b1d9](https://github.com/boundaryml/baml/commit/005b1d93b7f7d2aa12a1487911766cccd9c25e98)) - hellovai
+- Playground UX improvements (#742) - ([5cb56fd](https://github.com/boundaryml/baml/commit/5cb56fdc39496f0aedacd79766c0e93cb0e401b8)) - hellovai
+- Prompt Fiddle now auto-switches functions when you change files (#745)
+
+### Documentation
+
+- Added a large example project on
promptfiddle.com (#741) - ([f80da1e](https://github.com/boundaryml/baml/commit/f80da1e1dd11f0457b5789bc9ce6923a8ed88b51)) - aaronvg +- Mark ruby as in beta (#743) - ([901109d](https://github.com/boundaryml/baml/commit/901109dbb327e6e3e1b65fda37100fcd45f97e07)) - Samuel Lijin + +## [0.45.0](https://github.com/boundaryml/baml/compare/0.44.0..0.45.0) - 2024-06-29 + +### Bug Fixes + +- Fixed streaming in Python Client which didn't show result until later (#726) - ([e4f2daa](https://github.com/boundaryml/baml/commit/e4f2daa9e85bb1711d112fb0c87c0d769be0bb2d)) - Anish Palakurthi +- Improve playground stability on first load (#732) - ([2ac7b32](https://github.com/boundaryml/baml/commit/2ac7b328e89400cba0d9eb4f6d09c6a03feb71a5)) - Anish Palakurthi +- Add improved static analysis for jinja (#734) - ([423faa1](https://github.com/boundaryml/baml/commit/423faa1af5a594b7f78f7bb5620e3146a8989da5)) - hellovai + +### Documentation + +- Docs for Dynamic Types (#722) [https://docs.boundaryml.com/docs/calling-baml/dynamic-types](https://docs.boundaryml.com/docs/calling-baml/dynamic-types) + +### Features + +- Show raw cURL request in Playground (#723) - ([57928e1](https://github.com/boundaryml/baml/commit/57928e178549cb3e5118ce374aab5d0fbad7038b)) - Anish Palakurthi +- Support bedrock as a provider (#725) - ([c64c665](https://github.com/boundaryml/baml/commit/c64c66522a1d496493a30f593103209acd201364)) - Samuel Lijin + +## [0.44.0](https://github.com/boundaryml/baml/compare/0.43.0..0.44.0) - 2024-06-26 + +### Bug Fixes + +- Fix typebuilder for random enums (#721) + +## [0.43.0](https://github.com/boundaryml/baml/compare/0.42.0..0.43.0) - 2024-06-26 + +### Bug Fixes + +- fix pnpm lockfile issue (#720) + +## [0.42.0](https://github.com/boundaryml/baml/compare/0.41.0..0.42.0) - 2024-06-26 + +### Bug Fixes + +- correctly propagate LICENSE to baml-py (#695) - ([3fda880](https://github.com/boundaryml/baml/commit/3fda880bf39b32191b425ae75e8b491d10884cf6)) - Samuel Lijin + +### Miscellaneous Chores + +- update jsonish readme (#685) - ([b19f04a](https://github.com/boundaryml/baml/commit/b19f04a059ba18d54544cb278b6990b95170d3f3)) - Samuel Lijin + +### Vscode + +- add link to tracing, show token counts (#703) - ([64aa18a](https://github.com/boundaryml/baml/commit/64aa18a9cc34071655141c8f6e2ad04ac90e7be1)) - Samuel Lijin + +## [0.41.0] - 2024-06-20 + +### Bug Fixes + +- rollback git lfs, images broken in docs rn (#534) - ([6945506](https://github.com/boundaryml/baml/commit/694550664fa45b5f76987e2663c9d7e7a9a6a2d2)) - Samuel Lijin +- search for markdown blocks correctly (#641) - ([6b8abf1](https://github.com/boundaryml/baml/commit/6b8abf1ccf55bbe7c3bc1046c78081126e01f134)) - Samuel Lijin +- restore one-workspace-per-folder (#656) - ([a464bde](https://github.com/boundaryml/baml/commit/a464bde566199ace45285a78a7f542cd7217fb65)) - Samuel Lijin +- ruby generator should be ruby/sorbet (#661) - ([0019f39](https://github.com/boundaryml/baml/commit/0019f3951b8fe2b49e62eb11d869516b8088e9cb)) - Samuel Lijin +- ruby compile error snuck in (#663) - ([0cb2583](https://github.com/boundaryml/baml/commit/0cb25831788eb8b3eb0a38383917f6d1ffb5633a)) - Samuel Lijin + +### Documentation + +- add typescript examples (#477) - ([532481c](https://github.com/boundaryml/baml/commit/532481c3df4063b37a8834a5fe2bbce3bb37d2f5)) - Samuel Lijin +- add titles to code blocks for all CodeGroup elems (#483) - ([76c6b68](https://github.com/boundaryml/baml/commit/76c6b68b27ee37972fa226be0b4dfe31f7b4b5ec)) - Samuel Lijin +- add docs for round-robin clients 
(#500) - ([221f902](https://github.com/boundaryml/baml/commit/221f9020d850e6d24fe2fd8a684081726a0659af)) - Samuel Lijin
+- add ruby example (#689) - ([16e187f](https://github.com/boundaryml/baml/commit/16e187f6698a1cc86a37eedf2447648d810370ad)) - Samuel Lijin
+
+### Features
+
+- implement `baml version --check --output json` (#444) - ([5f076ac](https://github.com/boundaryml/baml/commit/5f076ace1f92dc2141b231c9e62f4dc23f7fef18)) - Samuel Lijin
+- show update prompts in vscode (#451) - ([b66da3e](https://github.com/boundaryml/baml/commit/b66da3ee355fcd6a8677d834ecb05af44cbf8f20)) - Samuel Lijin
+- add tests to check that baml version --check works (#454) - ([be1499d](https://github.com/boundaryml/baml/commit/be1499dfa82ff8ab923a16d45290758120d95015)) - Samuel Lijin
+- parse typescript versions in version --check (#473) - ([b4b2250](https://github.com/boundaryml/baml/commit/b4b2250c37b900db899256159bbfc3aa2ec819cb)) - Samuel Lijin
+- implement round robin client strategies (#494) - ([599fcdd](https://github.com/boundaryml/baml/commit/599fcdd2a45c5b1e935f36769784ca944566b88c)) - Samuel Lijin
+- add integ-tests support to build (#542) - ([f59cf2e](https://github.com/boundaryml/baml/commit/f59cf2e1a9ec7edbe174f4bc7ff9391f2cff3208)) - Samuel Lijin
+- make ruby work again (#650) - ([6472bec](https://github.com/boundaryml/baml/commit/6472bec231b581076ee7edefaab2e7979b2bf336)) - Samuel Lijin
+- Add RB2B tracking script (#682) - ([54547a3](https://github.com/boundaryml/baml/commit/54547a34d40cd40a43767919dbc9faa68a82faea)) - hellovai
+
+### Miscellaneous Chores
+
+- add nodemon config to typescript/ (#435) - ([231b396](https://github.com/boundaryml/baml/commit/231b3967bc947c4651156bc55fd66552782824c9)) - Samuel Lijin
+- finish gloo to BoundaryML renames (#452) - ([88a7fda](https://github.com/boundaryml/baml/commit/88a7fdacc826e78ef21c6b24745ee469d9d02e6a)) - Samuel Lijin
+- set up lfs (#511) - ([3a43143](https://github.com/boundaryml/baml/commit/3a431431e8e38dfc68763f15ccdcd1d131f23984)) - Samuel Lijin
+- add internal build tooling for sam (#512) - ([9ebacca](https://github.com/boundaryml/baml/commit/9ebaccaa542760cb96382ae2a91d780f1ade613b)) - Samuel Lijin
+- delete clients dir, this is now dead code (#652) - ([ec2627f](https://github.com/boundaryml/baml/commit/ec2627f59c7fe9edfff46fcdb65f9b9f0e2e072c)) - Samuel Lijin
+- consolidate vscode workspace, bump a bunch of deps (#654) - ([82bf6ab](https://github.com/boundaryml/baml/commit/82bf6ab1ad839f84782a7ef0441f21124c368757)) - Samuel Lijin
+- Add RB2B tracking script to prompt fiddle (#681) - ([4cf806b](https://github.com/boundaryml/baml/commit/4cf806bba26563fd8b6ddbd68296ab8bdfac21c4)) - hellovai
+- Adding better release script (#688) - ([5bec282](https://github.com/boundaryml/baml/commit/5bec282d39d2250b39ef4aba5d6bba9830a35988)) - hellovai
+
+### [AUTO-patch]
+
+- Version bump for nightly release [NIGHTLY:cli] [NIGHTLY:vscode_ext] [NIGHTLY:client-python] - ([d05a22c](https://github.com/boundaryml/baml/commit/d05a22ca4135887738adbce638193d71abca42ec)) - GitHub Action
+
+### Build
+
+- fix baml-core-ffi script (#521) - ([b1b7f4a](https://github.com/boundaryml/baml/commit/b1b7f4af0991ef6453f888f27930f3faaae337f5)) - Samuel Lijin
+- fix engine/ (#522) - ([154f646](https://github.com/boundaryml/baml/commit/154f6468ec0aa6de1b033ee1cbc76e60acc363ea)) - Samuel Lijin
+
+### Integ-tests
+
+- add ruby test - ([c0bc101](https://github.com/boundaryml/baml/commit/c0bc10126ea32d099f1398f2c5faa08b111554ba)) - Sam Lijin
+
+### Readme
+
+- add function calling, collapse the table (#505) - ([2f9024c](https://github.com/boundaryml/baml/commit/2f9024c28ba438267de37ac43c6570a2f0398b5a)) - Samuel Lijin
+
+### Release
+
+- bump versions for everything (#662) - ([c0254ae](https://github.com/boundaryml/baml/commit/c0254ae680365854c51c7a4e58ea68d1901ea033)) - Samuel Lijin
+
+### Vscode
+
+- check for updates on the hour (#434) - ([c70a3b3](https://github.com/boundaryml/baml/commit/c70a3b373cb2346a0df9a1eba0ebacb74d59b53e)) - Samuel Lijin
diff --git a/fern/pages/welcome.mdx b/fern/pages/welcome.mdx
new file mode 100644
index 000000000..55dd7d756
--- /dev/null
+++ b/fern/pages/welcome.mdx
@@ -0,0 +1,87 @@
+---
+title: 🏠 Welcome
+description: The easiest way to use LLMs
+slug: home
+layout: overview
+hide-toc: false
+---
+
+**BAML is a domain-specific language to generate structured outputs from LLMs -- with the best developer experience.**
+
+With BAML you can build reliable agents and RAG chatbots, extract data from PDFs, and more.
+
+### A small sample of features:
+1. **An amazingly fast developer experience** for prompting in the BAML VSCode playground.
+2. **Fully type-safe outputs**, even when streaming structured data (that means autocomplete!).
+3. **Flexibility** -- it works with **any LLM**, **any language**, and **any schema**.
+4. **State-of-the-art structured outputs** that even [outperform OpenAI with their own models](https://www.boundaryml.com/blog/sota-function-calling?q=0) -- plus it works with open-source models.
+
+
+## Products
+
+
+
+  Everything you need to know about how to get started with BAML. From installation to prompt engineering techniques.
+
+
+  An online interactive playground to play around with BAML without any installation.
+
+
+  Examples of prompts, projects, and more.
+
+
+  Language docs on all BAML syntax. Quickly learn syntax with simple examples and code snippets.
+
+
+
+## Motivation
+
+Prompts are more than just f-strings; they're actual functions with logic that can quickly become complex to organize, maintain, and test.
+
+Currently, developers craft LLM prompts as if they're writing raw HTML and CSS in text files, lacking:
+- Type safety
+- Hot-reloading or previews
+- Linting
+
+The situation worsens when dealing with structured outputs. Since most prompts rely on Python and Pydantic, developers must _execute_ their code and set up an entire Python environment just to test a minor prompt adjustment, or they have to set up a whole Python microservice just to call an LLM.
+
+BAML allows you to view and run prompts directly within your editor, similar to how Markdown Preview works -- no additional setup necessary -- and it interoperates with all your favorite languages and frameworks.
+
+Just as TSX/JSX provided the ideal abstraction for web development, BAML offers the perfect abstraction for prompt engineering. Watch our [demo video](/guide/introduction/what-is-baml#demo-video) to see it in action.
+
+## Comparisons
+
+Here's our in-depth comparison with a couple of popular frameworks:
+- [BAML vs Pydantic](/guide/comparisons/baml-vs-pydantic)
+- [BAML vs Marvin](/guide/comparisons/baml-vs-marvin)
+
+{/*
+ Insert something powerful here. +
+
+
+
+
+
+*/}
\ No newline at end of file
diff --git a/fern/pnpm-lock.yaml b/fern/pnpm-lock.yaml
new file mode 100644
index 000000000..31f019479
--- /dev/null
+++ b/fern/pnpm-lock.yaml
@@ -0,0 +1,17 @@
+lockfileVersion: '6.0'
+
+settings:
+  autoInstallPeers: true
+  excludeLinksFromLockfile: false
+
+devDependencies:
+  fern-api:
+    specifier: ^0.31.24
+    version: 0.31.24
+
+packages:
+
+  /fern-api@0.31.24:
+    resolution: {integrity: sha512-hO0BY0q3+//OVLALI6875Sh6OlMPRJG4HeIRjIaX4ZmMtPbsZKMoowPSKWyewwnw2uYotklIgIsZWLKoAo7C3A==}
+    hasBin: true
+    dev: true
diff --git a/fern/snippets/allowed-role-metadata-basic.mdx b/fern/snippets/allowed-role-metadata-basic.mdx
new file mode 100644
index 000000000..7200c7ec7
--- /dev/null
+++ b/fern/snippets/allowed-role-metadata-basic.mdx
@@ -0,0 +1,34 @@
+
+  Which role metadata should we forward to the API? **Default: `[]`**
+
+  For example, you can set this to `["foo", "bar"]` to forward the `foo` and `bar` metadata to the API.
+
+  If you do not set `allowed_role_metadata`, we will not forward any role metadata to the API even if it is set in the prompt.
+
+  Then in your prompt you can use something like:
+  ```baml
+  client Foo {
+    provider openai
+    options {
+      allowed_role_metadata: ["foo", "bar"]
+    }
+  }
+
+  client FooWithout {
+    provider openai
+    options {
+    }
+  }
+  template_string Foo() #"
+    {{ _.role('user', foo={"type": "ephemeral"}, bar="1", cat=True) }}
+    This will have the foo and bar metadata, but not cat. But only for Foo, not FooWithout.
+    {{ _.role('user') }}
+    This will have none of the role metadata for Foo or FooWithout.
+  "#
+  ```
+
+  You can use the playground to inspect the raw curl request and see exactly what is being sent to the API.
\ No newline at end of file
diff --git a/fern/snippets/allowed-role-metadata.mdx b/fern/snippets/allowed-role-metadata.mdx
new file mode 100644
index 000000000..cdfdaa8bc
--- /dev/null
+++ b/fern/snippets/allowed-role-metadata.mdx
@@ -0,0 +1,41 @@
+
+  Which role metadata should we forward to the API? **Default: `[]`**
+
+  For example, you can set this to `["cache_control"]` to forward the cache policy to the API.
+
+  If you do not set `allowed_role_metadata`, we will not forward any role metadata to the API even if it is set in the prompt.
+
+  Then in your prompt you can use something like:
+  ```baml
+  client ClaudeWithCaching {
+    provider anthropic
+    options {
+      model claude-3-haiku-20240307
+      api_key env.ANTHROPIC_API_KEY
+      max_tokens 1000
+      allowed_role_metadata ["cache_control"]
+      headers {
+        "anthropic-beta" "prompt-caching-2024-07-31"
+      }
+    }
+  }
+
+  client FooWithout {
+    provider anthropic
+    options {
+    }
+  }
+
+  template_string Foo() #"
+    {{ _.role('user', cache_control={"type": "ephemeral"}) }}
+    This will be cached for ClaudeWithCaching, but not for FooWithout!
+    {{ _.role('user') }}
+    This will not be cached for ClaudeWithCaching or FooWithout!
+  "#
+  ```
+
+  You can use the playground to inspect the raw curl request and see exactly what is being sent to the API.
\ No newline at end of file
diff --git a/fern/snippets/client-constructor.mdx b/fern/snippets/client-constructor.mdx
new file mode 100644
index 000000000..42c0b2aeb
--- /dev/null
+++ b/fern/snippets/client-constructor.mdx
@@ -0,0 +1,29 @@
+
+This configures which provider to use. The provider is responsible for making the actual API calls to the LLM service, and is a required field.
+
+The configuration modifies the URL request the BAML runtime makes.
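+
+For example, a minimal client definition looks like the sketch below (the client name, model, and environment variable are illustrative, not prescribed defaults):
+
+```baml
+client MyClient {
+  provider openai
+  options {
+    model gpt-4o
+    api_key env.OPENAI_API_KEY
+  }
+}
+```
+
+The tables below list the providers you can pass here.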
+
+| Provider Name    | Docs                                                                 | Notes                                                      |
+| ---------------- | -------------------------------------------------------------------- | ---------------------------------------------------------- |
+| `anthropic`      | [Anthropic](/docs/snippets/clients/providers/anthropic)              |                                                            |
+| `aws-bedrock`    | [AWS Bedrock](/docs/snippets/clients/providers/aws-bedrock)          |                                                            |
+| `azure-openai`   | [Azure OpenAI](/docs/snippets/clients/providers/azure)               |                                                            |
+| `google-ai`      | [Google AI](/docs/snippets/clients/providers/gemini)                 |                                                            |
+| `openai`         | [OpenAI](/docs/snippets/clients/providers/openai)                    |                                                            |
+| `openai-generic` | [OpenAI (generic)](/docs/snippets/clients/providers/openai-generic)  | Any model provider that supports an OpenAI-compatible API  |
+| `vertex-ai`      | [Vertex AI](/docs/snippets/clients/providers/vertex)                 |                                                            |
+
+We also have some special providers that allow composing clients together:
+| Provider Name | Docs                                              | Notes                                        |
+| ------------- | ------------------------------------------------- | -------------------------------------------- |
+| `fallback`    | [Fallback](/docs/snippets/clients/fallback)       | Used to chain models conditional on failures |
+| `round-robin` | [Round Robin](/docs/snippets/clients/round-robin) | Used to load balance                         |
+
+
+
+
+These vary per provider. Please see the provider-specific documentation for more
+information. Generally they are pass-through options to the POST request made
+to the LLM.
+
+
diff --git a/fern/snippets/setting-env-vars.mdx b/fern/snippets/setting-env-vars.mdx
new file mode 100644
index 000000000..c55cd5129
--- /dev/null
+++ b/fern/snippets/setting-env-vars.mdx
@@ -0,0 +1,76 @@
+To set environment variables:
+
+
+
+
+
+Once you open a `.baml` file in VSCode, you should see a small button over every BAML function: `Open Playground`.
+
+Then you should be able to set environment variables in the settings tab.
+
+
+
+Or type `BAML Playground` in the VSCode Command Bar (`CMD + Shift + P` or `CTRL + Shift + P`) to open the playground.
+
+
+
+
+
+  BAML will expect these to be set already in your program **before** you import the baml_client in Python / TS / etc.
+
+  Any of the following strategies for setting env vars are compatible with BAML:
+  - setting them in your shell before running your program
+  - in your `Dockerfile`
+  - in your `next.config.js`
+  - in your Kubernetes manifest
+  - from secrets-store.csi.k8s.io
+  - from a secrets provider such as [Infisical](https://infisical.com/) / [Doppler](https://www.doppler.com/)
+  - from a `.env` file (using the `dotenv` CLI)
+  - using account credentials for ephemeral token generation (e.g. Vertex AI Auth Tokens)
+
+  ```bash
+  export MY_SUPER_SECRET_API_KEY="..."
+  python my_program_using_baml.py
+  ```
+
+
+
+
+  Requires BAML Version 0.57+
+
+
+  If you don't want BAML to try to auto-load your env vars, you can manually call `reset_baml_env_vars`
+with the current environment variables.
+ + + ```python Python + + from baml_client import b + from baml_client import reset_baml_env_vars + import os + import dotenv + + dotenv.load_dotenv() + reset_baml_env_vars(dict(os.environ)) + ``` + + ```typescript TypeScript + import dotenv from 'dotenv' + // Wait to import the BAML client until after loading environment variables + import { b, resetBamlEnvVars } from 'baml-client' + + dotenv.config() + resetBamlEnvVars(process.env) + ``` + + ```ruby Ruby (beta) + require 'dotenv/load' + + # Wait to import the BAML client until after loading environment variables + # reset_baml_env_vars is not yet implemented in the Ruby client + require 'baml_client' + ``` + + + + + diff --git a/root.code-workspace b/root.code-workspace index dd789bfd1..e15d8138f 100644 --- a/root.code-workspace +++ b/root.code-workspace @@ -9,6 +9,9 @@ { "path": "docs" }, + { + "path": "fern" + }, { "path": "engine" },