
OpenAI does not support changing the base url #137087

Open
pepijndevos opened this issue Feb 1, 2025 · 4 comments

Comments

@pepijndevos

The problem

As indicated by the persistent PRs and equally persistent rejections in e.g. #122423, #94051, #94341, #99517, #101356, and #121105, there is clear interest from users in supporting OpenAI-compatible APIs, but also a clear desire from Home Assistant member @balloob not to confuse the experience of actual OpenAI users.

So let's be clear:

  • Adding extra fields to OpenAI Conversation is unacceptable
  • Adding a new official integration with OpenAI in its name is probably equally unacceptable

So, let's find a solution.

I think the first constraint does require some kind of separate integration with its own config flow, but that integration could internally share code with the OpenAI integration to avoid duplication. There would be a fair bit of boilerplate, but minimal maintenance burden. It would also serve as a clear home for bug reports and for handling divergent behavior, without retroactively generalizing the actual OpenAI integration.
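To make that concrete, here is a minimal sketch of what such a config flow could look like. This is not code from any existing integration: the domain, the CONF_BASE_URL key, the default URL, and the class name are all placeholder assumptions of mine.

```python
import voluptuous as vol

from homeassistant import config_entries
from homeassistant.const import CONF_API_KEY

DOMAIN = "litellm_conversation"  # hypothetical domain name
CONF_BASE_URL = "base_url"  # hypothetical option key

STEP_USER_SCHEMA = vol.Schema(
    {
        vol.Required(CONF_API_KEY): str,
        # The one extra field the actual OpenAI integration would not expose:
        vol.Required(CONF_BASE_URL, default="http://localhost:8080/v1"): str,
    }
)


class LiteLLMConversationConfigFlow(config_entries.ConfigFlow, domain=DOMAIN):
    """Config flow mirroring the OpenAI one, plus a base URL field."""

    VERSION = 1

    async def async_step_user(self, user_input=None):
        """Handle the initial step."""
        if user_input is None:
            return self.async_show_form(
                step_id="user", data_schema=STEP_USER_SCHEMA
            )
        # Connection validation could call shared helpers from the OpenAI
        # integration here instead of duplicating them.
        return self.async_create_entry(
            title="LiteLLM Conversation", data=user_input
        )
```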

To avoid confusion I would suggest calling this new integration "LiteLLM Conversation", and then using virtual integrations to add other user-facing names like "llama.cpp", "OpenRouter", and whatever other services are deemed user-relevant that use an OpenAI-compatible API under the hood.
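For reference, a virtual integration is just a small manifest that points at the integration doing the actual work, so each user-facing name is cheap to add. A sketch for a hypothetical llama.cpp entry (the domain and supported_by values are assumptions matching the naming above):

```json
{
  "domain": "llama_cpp",
  "name": "llama.cpp",
  "integration_type": "virtual",
  "supported_by": "litellm_conversation"
}
```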

I think these two things combined would strike a good balance for both users and developers:

  • OpenAI users only see one "OpenAI" integration without extra configuration options
  • A new "subclass" integration minimizes maintenance and duplication of code, while offering a clear point for bug reports and workarounds
  • Virtual integrations make it clear to users that their service, e.g. llama.cpp, is supported by the "LiteLLM Conversation" integration.

This issue would have been a PR, but I would like approval of an acceptable approach before I start building.

I really hope we can find a good solution for adding support for OpenAI-compatible services, which are clearly in demand, without making the experience for OpenAI users worse or more confusing. I'm in my own "year of the voice", developing a product around llama.cpp, so having an official integration that supports it matters to me.

What version of Home Assistant Core has the issue?

core-2025.1.2

What was the last working version of Home Assistant Core?

No response

What type of installation are you running?

Home Assistant OS

Integration causing the issue

OpenAI Conversation

Link to integration documentation on our website

https://www.home-assistant.io/integrations/openai_conversation/

Diagnostics information

No response

Example YAML snippet

No response

Anything in the logs that might be useful for us?

No response

Additional information

No response

@home-assistant

home-assistant bot commented Feb 1, 2025

Hey there @balloob, mind taking a look at this issue as it has been labeled with an integration (openai_conversation) you are listed as a code owner for? Thanks!

Code owner commands

Code owners of openai_conversation can trigger bot actions by commenting:

  • @home-assistant close: Closes the issue.
  • @home-assistant rename Awesome new title: Renames the issue.
  • @home-assistant reopen: Reopens the issue.
  • @home-assistant unassign openai_conversation: Removes the current integration label and assignees on the issue; add the integration domain after the command.
  • @home-assistant add-label needs-more-information: Adds a label (needs-more-information, problem in dependency, problem in custom component) to the issue.
  • @home-assistant remove-label needs-more-information: Removes a label (needs-more-information, problem in dependency, problem in custom component) from the issue.

(message by CodeOwnersMention)


openai_conversation documentation
openai_conversation source
(message by IssueLinks)

@dragon2611

There is an Extended OpenAI Conversation fork in HACS that seems to work for OpenAI-compatible APIs. I tested it calling Meta-Llama-3.1-8B-Instruct on DeepInfra and it seemed to work.

I've not used it long enough to tell whether an 8B model is good enough or whether I need a larger one, but it responds pretty much instantly (I picked an 8B-parameter model to keep costs down).
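(For anyone wondering why a base URL is all it takes: the openai Python client already accepts one. A rough sketch of the kind of call described above; the DeepInfra endpoint and model ID are my best guesses, and the key is a placeholder.)

```python
from openai import OpenAI

# base_url is the only thing distinguishing an OpenAI-compatible backend
# from OpenAI itself; the endpoint and model ID below are untested assumptions.
client = OpenAI(
    api_key="YOUR_DEEPINFRA_API_KEY",
    base_url="https://api.deepinfra.com/v1/openai",
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Turn off the kitchen lights."}],
)
print(response.choices[0].message.content)
```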

@GarethBlain

There is an Extended OpenAI Conversation fork in HACS that seems to work for OpenAI-compatible APIs. I tested it calling Meta-Llama-3.1-8B-Instruct on DeepInfra and it seemed to work.

@dragon2611 do you have a link to it as a stop-gap until @pepijndevos gets the go-ahead from @balloob to look into this resolution? :)

I really want to play with AI responses, but as I already pay for one secure AI service that works, I don't want to pay again for OpenAI and give them all my data. :)

@dragon2611
