feat (provider/venice): Add Venice provider #4414

Open · wants to merge 4 commits into main
Conversation


proteanx commented on Jan 16, 2025

Venice Provider Integration

Overview

This PR adds support for Venice, a high-performance AI model provider focused on privacy and security. The integration enables both text and image generation capabilities through Venice's API.

Key Features

Text Generation

  • Support for Venice's Llama- and Qwen-based text models:
    • llama-3.2-3b (small but efficient)
    • llama-3.3-70b (large, powerful)
    • llama-3.1-405b (very large)
    • dolphin-2.9.2-qwen2-72b
    • qwen32b
  • Optional system prompt inclusion via include_venice_system_prompt parameter
  • Streaming support for real-time text generation (see the streaming sketch below)
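For illustration, here is a minimal streaming sketch; the AI SDK calls (streamText, textStream) are standard, while the venice import and the exact wiring of include_venice_system_prompt are assumptions about this PR rather than its confirmed API.

import { streamText } from 'ai';
import { venice } from '@ai-sdk/venice';

// Stream tokens from one of the smaller Venice models as they are generated.
const result = streamText({
  model: venice('llama-3.2-3b'),
  prompt: 'Summarize the benefits of privacy-preserving inference.',
  // include_venice_system_prompt would presumably be passed through as a
  // Venice-specific option here; the exact parameter name and placement
  // depend on this PR's implementation.
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}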

Image Generation

  • Support for multiple image generation models:
    • fluently-xl (high-quality generation)
    • flux-dev (standard content filtering)
    • flux-dev-uncensored (no content filtering)
    • pony-realism (specialized for realism)
    • stable-diffusion-3.5
  • Advanced image parameters:
    venice_parameters: {
      style_preset: "photographic", // Options: anime, photographic, digital-art
      safe_mode: true,             // Content filtering
      num_inference_steps: 30,     // Generation quality
      guidance_scale: 7.5,         // Prompt adherence
      height: 1024,               
      width: 1024,
      seed: 12345                  // Reproducibility
    }

Implementation Details

  • OpenAI-compatible interface for easy integration
  • Comprehensive test coverage for both text and image generation
  • Environment variable support (VENICE_API_KEY)
  • Custom configuration options (API endpoint, headers); a configuration sketch follows this list
  • TypeScript types and documentation
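As a rough sketch of those configuration points, a custom setup might look like the following; the createVenice factory name, default endpoint, and option names are assumptions modeled on other AI SDK providers, not the confirmed API of this PR.

import { createVenice } from '@ai-sdk/venice'; // hypothetical factory name

// VENICE_API_KEY is read from the environment by default; every field shown
// here is an optional override.
const venice = createVenice({
  apiKey: process.env.VENICE_API_KEY,
  baseURL: 'https://api.venice.ai/api/v1', // assumed default endpoint
  headers: { 'x-example-header': 'value' }, // extra request headers
});

const model = venice('llama-3.3-70b');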

Example Usage

Text Generation:

import { generateText } from 'ai';
import { venice } from '@ai-sdk/venice';

const { text } = await generateText({
  model: venice('llama-3.3-70b'),
  prompt: 'Write a JavaScript function that sorts a list:',
});

Image Generation:

const { images } = await generateImages({
  model: venice.imageModel('fluently-xl'),
  prompt: 'A serene mountain landscape at sunset',
  venice_parameters: {
    style_preset: 'photographic',
    safe_mode: true
  }
});

Documentation

Full documentation has been added at content/providers/01-ai-sdk-providers/45-venice.mdx.

Testing (an API key can be provided on request)

Includes a test suite covering:

  • Provider configuration
  • Text generation
  • Error handling
  • Parameter validation
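For context, a provider-configuration test in this suite might look roughly like the sketch below; vitest and the createVenice factory are assumptions for illustration, not the actual test code in this PR.

import { describe, expect, it } from 'vitest';
import { createVenice } from '@ai-sdk/venice'; // hypothetical factory name

describe('provider configuration', () => {
  it('creates a chat model for a known model id', () => {
    const venice = createVenice({ apiKey: 'test-key' });
    const model = venice('llama-3.3-70b');

    // Language models in the AI SDK expose the id they were created with.
    expect(model.modelId).toBe('llama-3.3-70b');
  });
});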

Dependencies

  • Added @ai-sdk/venice package with minimal dependencies
  • Leverages the existing @ai-sdk/openai-compatible package for API compatibility (see the sketch below)
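To illustrate that dependency, a Venice provider built on @ai-sdk/openai-compatible can be a thin wrapper along these lines; createOpenAICompatible is a real export of that package, but the Venice-specific values here are illustrative rather than taken from this PR.

import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

// Venice exposes an OpenAI-compatible API, so most of the request handling
// is delegated to the shared openai-compatible package.
export const venice = createOpenAICompatible({
  name: 'venice',
  baseURL: 'https://api.venice.ai/api/v1', // assumed endpoint
  apiKey: process.env.VENICE_API_KEY,
});

The resulting venice('llama-3.3-70b') model can then be used with generateText and streamText like any other AI SDK language model.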

proteanx changed the title from "Venice provider" to "feat (provider/venice): Add Venice provider" on Jan 16, 2025
lgrammel (Collaborator) commented on Jan 17, 2025

Can you publish this as a 3rd-party provider in a separate repository? We can then add a documentation page under community providers (please open a PR for that).
