
Intel B580 -> not able to run Ollama serve on GPU after following guide #12772

Open · Mushtaq-BGA opened this issue Feb 5, 2025 · 3 comments

@Mushtaq-BGA
I followed the guides below, but I am not able to run ollama serve on the GPU (Intel B580):

https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/bmg_quickstart.md#32-ollama
https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md

[Screenshots of the ollama serve output]

@liu-shaojun (Contributor)

Hi @Mushtaq-BGA

Based on your screenshot, the output from ollama serve looks normal. Could you share the exact error message you're encountering? Alternatively, you can try running the following commands to see if everything works correctly:

./ollama pull <model_name>
./ollama run <model_name>

Let me know the results!
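
As an additional sanity check, it can help to confirm that the B580 is visible to the SYCL runtime before pulling a model. A minimal sketch, assuming a Linux setup with oneAPI installed under the default /opt/intel/oneapi path (adjust the path for your environment):

```bash
# Load the oneAPI environment; this path is the default install location
source /opt/intel/oneapi/setvars.sh

# List the devices visible to the SYCL runtime; the Arc B580 should appear
# as a level_zero:gpu device. If it does not, Ollama cannot offload to it.
sycl-ls
```

If no level_zero:gpu entry appears in the sycl-ls output, Ollama will fall back to the CPU regardless of its own configuration.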

@Mushtaq-BGA (Author)

[Screenshot: output after running the suggested commands]

@liu-shaojun (Contributor)

I debugged the issue on the user's machine; the root cause was that oneAPI 2024.0 was installed. After switching to oneAPI 2024.2, Ollama ran on the GPU normally.
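
For anyone hitting the same symptom, a quick way to check which oneAPI release is installed and active is sketched below. This assumes a default Linux install under /opt/intel/oneapi; the directory names and the availability of the icpx compiler depend on which components you installed:

```bash
# List the installed oneAPI release directories (e.g. 2024.0, 2024.2)
ls /opt/intel/oneapi/

# If the DPC++ compiler component is installed, its version tracks the release
icpx --version

# Activate a specific release (2024.2 here) before starting ollama serve;
# the per-release oneapi-vars.sh script is part of the 2024.x directory layout
source /opt/intel/oneapi/2024.2/oneapi-vars.sh
```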
