
It needs default model/endpoint settings #313

Open
gnusupport opened this issue Jan 28, 2025 · 1 comment
Labels
enhancement New feature or request

Comments

@gnusupport

Why does it ask for Ollama if I'm using Groq?
It always tries Ollama first, fails, and only then falls back to the custom model.

Please remove that behavior.

It also doesn't work out of the box with llama.cpp, but it should.

Let me set the endpoint and any model name, so it starts working without ever showing the Ollama error.
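What the reporter is asking for amounts to pointing the extension at any OpenAI-compatible endpoint (Groq, llama.cpp's `llama-server`, etc.) plus a model name, with no Ollama probe in between. As a rough illustration (the base URL and model name below are placeholders, not Page Assist internals), a chat request against such an endpoint looks like:

```python
import json
import urllib.request


def build_chat_request(base_url: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat completion request.

    base_url and model are hypothetical; any OpenAI-compatible server
    (Groq, llama.cpp's llama-server, LM Studio) accepts this shape
    at /v1/chat/completions.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload


def send_chat(base_url: str, model: str, prompt: str, api_key: str = ""):
    """Send the request; api_key is only needed for hosted providers."""
    url, payload = build_chat_request(base_url, model, prompt)
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    req = urllib.request.Request(url, data=json.dumps(payload).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example (not executed here): a local llama.cpp server on its default port
# reply = send_chat("http://localhost:8080", "any-model-name", "Hello")
```

The point of the sketch is that nothing here depends on Ollama being present: given an endpoint and a model name, the request can go straight to the configured server.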

n4ze3m (Owner) commented Jan 29, 2025

Hey, yeah, Page Assist was initially designed to use the Ollama API. Later, I added support for other providers. You can turn off the Ollama error from the settings.

[screenshot attachment: the settings page with the relevant toggle]

I missed the llama.cpp configuration, but I will include it in the next update. It will work similarly to how Ollama, Llamafile, and LM Studio work in Page Assist: if the server is running, the models load automatically.
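The "models load automatically" behavior described above can be sketched against the standard OpenAI-compatible `GET /v1/models` endpoint, which llama.cpp's `llama-server` also exposes. This is an illustrative sketch, not Page Assist's actual code; the function names are made up:

```python
import json
import urllib.request
from urllib.error import URLError


def parse_model_ids(body: dict):
    """Extract ids from the standard {"data": [{"id": ...}, ...]} response shape."""
    return [m["id"] for m in body.get("data", []) if "id" in m]


def list_models(base_url: str, timeout: float = 2.0):
    """Return model ids from an OpenAI-compatible /v1/models endpoint,
    or an empty list if the server is unreachable.

    Degrading to an empty list (rather than surfacing an error) is what
    lets a client skip providers whose server is not running.
    """
    url = base_url.rstrip("/") + "/v1/models"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.load(resp)
    except (URLError, OSError, ValueError):
        return []  # server down or response malformed: treat as "no models"
    return parse_model_ids(body)
```

With logic like this, each configured provider is probed once, and only the ones that answer contribute models to the picker.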

@n4ze3m n4ze3m added the enhancement New feature or request label Jan 29, 2025