Describe the bug
I get the following unhandled rejection in the Perplexica backend logs:
perplexica-backend-1 | error: Unhandled Rejection at: [object Promise], reason: Error: Ollama call failed with status code 500: llama runner process no longer running: -1
To Reproduce
Launch Perplexica with
OLLAMA = "http://host.docker.internal:11434"
in the config, then launch Ollama and configure Perplexica to use it.
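For reference, a minimal sketch of the relevant config section. The `[API_ENDPOINTS]` table name is an assumption about the stock config.toml layout; only the OLLAMA key and URL come from this report:

```toml
# Sketch of the Perplexica config.toml endpoint section.
# [API_ENDPOINTS] is assumed; adjust to your config layout.
[API_ENDPOINTS]
# host.docker.internal resolves to the Docker host from inside the container,
# letting the containerized backend reach an Ollama server on the host.
OLLAMA = "http://host.docker.internal:11434"
```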
Additional context
Perplexica does see Ollama: I can select it in the settings, and it also correctly detects the model that is running.
I checked Ollama directly with
curl http://localhost:11434/api/generate -d '{ "model": "aya", "prompt": "Why is the sky blue?" }'
and it works fine.
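As a slightly stricter check, the same request can be made with streaming disabled (`stream` is a documented parameter of Ollama's /api/generate endpoint), which returns one complete JSON object and makes a mid-generation crash easier to spot:

```sh
# Same sanity check, but with streaming disabled so the response is a
# single JSON object rather than a stream of chunks.
curl http://localhost:11434/api/generate -d '{
  "model": "aya",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```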
It seems to work fine for me; maybe check out ollama/ollama#3774.