Hey, yeah, Page Assist was initially designed to use the Ollama API. Later, I added support for other providers. You can turn off the Ollama error from the settings.
I missed the llama.cpp configuration, but I will include it in the next update. It will work the same way as Ollama, Llamafile, and LM Studio do in Page Assist: if the server is running, it will automatically load the available models, etc.
Why is it asking for Ollama if I have Groq configured?
It always tries Ollama first, fails, and only then falls back to the custom model.
Please remove that behavior.
And it doesn't work straightforwardly with llama.cpp, but it should.
Please let me set the endpoint and any model name so that it starts working without showing the Ollama error.
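For reference, the workflow described above can be sketched roughly as follows. This assumes llama.cpp's bundled `llama-server` binary, which exposes an OpenAI-compatible API under `/v1`; the model path and port here are placeholders, not values from this thread:

```shell
# Start llama.cpp's built-in server (serves an OpenAI-compatible API under /v1).
# The GGUF path is a placeholder; substitute your own model file.
llama-server -m ./models/model.gguf --port 8080

# From another terminal, confirm the server is up and see the loaded model:
curl http://localhost:8080/v1/models
```

With the server running, a custom OpenAI-compatible provider in Page Assist would then point at `http://localhost:8080/v1`. Since llama.cpp serves whichever model it was started with, the model name entered in the UI could in principle be arbitrary, which matches the request above.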