Install prerequisites:
- Go
- A C/C++ compiler, e.g. Clang on macOS; TDM-GCC (Windows amd64) or llvm-mingw (Windows arm64); GCC/Clang on Linux
Then build and run Ollama from the root directory of the repository:

```shell
go run . serve
```
## macOS (Apple Silicon)

macOS on Apple Silicon supports Metal, which is built into the Ollama binary. No additional steps are required.
## macOS (Intel)

Install prerequisites:

- CMake, or `brew install cmake`

Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```

Lastly, run Ollama:

```shell
go run . serve
```
## Windows

Install prerequisites:

- CMake
- Visual Studio 2022, including the Native Desktop Workload
- (Optional) AMD GPU support
- (Optional) NVIDIA GPU support
> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.

> [!IMPORTANT]
> ROCm is not compatible with Visual Studio CMake generators. Use `-GNinja` when configuring the project.

> [!IMPORTANT]
> CUDA is only compatible with Visual Studio CMake generators.
Then, configure and build the project:

```shell
cmake -B build
cmake --build build --config Release
```

Lastly, run Ollama:

```shell
go run . serve
```
## Windows (ARM)

Windows ARM does not support additional acceleration libraries at this time.
## Linux

Install prerequisites:

- CMake, or `sudo apt install cmake` (Debian/Ubuntu) or `sudo dnf install cmake` (Fedora/RHEL)
- (Optional) AMD GPU support
- (Optional) NVIDIA GPU support
> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.
Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```

Lastly, run Ollama:

```shell
go run . serve
```
## Docker

```shell
docker build .
```

### ROCm

```shell
docker build --build-arg FLAVOR=rocm .
```
## Running tests

To run tests, use `go test`:

```shell
go test ./...
```
## Library detection

Ollama looks for acceleration libraries in the following paths relative to the `ollama` executable:

- `./lib/ollama` (Windows)
- `../lib/ollama` (Linux)
- `.` (macOS)
- `build/lib/ollama` (for development)

If the libraries are not found, Ollama will not run with any acceleration libraries.
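To make the search order concrete, here is a small Go sketch that computes those candidate directories for a given executable path. `resolveLibDirs` is a hypothetical helper for illustration, not Ollama's internal API:

```go
// Sketch of the documented acceleration-library search order.
// resolveLibDirs is a hypothetical helper, not Ollama's actual implementation.
package main

import (
	"fmt"
	"path/filepath"
)

// resolveLibDirs returns the candidate library directories for a given
// executable path, in the documented order.
func resolveLibDirs(exe string) []string {
	base := filepath.Dir(exe)
	return []string{
		filepath.Join(base, "lib", "ollama"),          // ./lib/ollama (Windows)
		filepath.Join(base, "..", "lib", "ollama"),    // ../lib/ollama (Linux)
		filepath.Join(base),                           // . (macOS)
		filepath.Join(base, "build", "lib", "ollama"), // build/lib/ollama (development)
	}
}

func main() {
	for _, dir := range resolveLibDirs("/opt/ollama/bin/ollama") {
		fmt.Println(dir)
	}
}
```

For an executable at `/opt/ollama/bin/ollama`, the Linux candidate resolves to `/opt/ollama/lib/ollama` after path cleaning.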