# Development

Install prerequisites:

- Go
- A C/C++ compiler: e.g. Clang on macOS; TDM-GCC (Windows amd64) or llvm-mingw (Windows arm64); GCC or Clang on Linux
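Before building, it can save time to confirm the toolchain is actually on `PATH`. A small sketch (the `check_tool` helper is illustrative, not part of Ollama):

```shell
# check_tool reports whether a command is available on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

check_tool go   # the Go toolchain
check_tool cc   # the system C compiler
```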

Then build and run Ollama from the root directory of the repository:

```shell
go run . serve
```
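Once the server is up, you can verify it is reachable from another terminal. This assumes the default listen address of `127.0.0.1:11434`:

```shell
curl http://localhost:11434/api/version
```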

## macOS (Apple Silicon)

Apple Silicon Macs support Metal, which is built into the Ollama binary. No additional steps are required.

## macOS (Intel)

Install prerequisites:

- CMake (e.g. `brew install cmake`)

Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```

Lastly, run Ollama:

```shell
go run . serve
```

## Windows

Install prerequisites:

> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.

> [!IMPORTANT]
> ROCm is not compatible with Visual Studio CMake generators. Use `-GNinja` when configuring the project.

> [!IMPORTANT]
> CUDA is only compatible with Visual Studio CMake generators.

Then, configure and build the project:

```shell
cmake -B build
cmake --build build --config Release
```
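Putting the notes above together, a ROCm configure step would use the Ninja generator rather than Visual Studio. One possible invocation (a sketch, not verified against every toolchain):

```shell
cmake -B build -GNinja -DCMAKE_BUILD_TYPE=Release
cmake --build build
```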

Lastly, run Ollama:

```shell
go run . serve
```

## Windows (ARM)

Windows ARM does not support additional acceleration libraries at this time.

## Linux

Install prerequisites:

- CMake (e.g. `sudo apt install cmake` or `sudo dnf install cmake`)
- (Optional) AMD GPU support
- (Optional) NVIDIA GPU support

> [!IMPORTANT]
> Ensure prerequisites are in `PATH` before running CMake.

Then, configure and build the project:

```shell
cmake -B build
cmake --build build
```
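The build can be parallelized with CMake's standard `-j` flag, for example using all cores reported by `nproc`:

```shell
cmake --build build -j"$(nproc)"
```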

Lastly, run Ollama:

```shell
go run . serve
```

## Docker

```shell
docker build .
```

### ROCm

```shell
docker build --build-arg FLAVOR=rocm .
```
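To use the resulting image you need a tag to refer to it. A sketch (the `ollama-dev` tag is an illustrative choice; the volume and port mapping follow Ollama's default data directory and port 11434):

```shell
docker build -t ollama-dev .
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama-dev
```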

## Running tests

To run tests, use `go test`:

```shell
go test ./...
```
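When iterating, standard `go test` flags narrow the run to a single package or test (`./server` is one package in the repository; `TestChat` is a hypothetical test name):

```shell
# Run a single package tree's tests, verbosely.
go test -v ./server/...

# Run only tests whose names match a regexp.
go test -run TestChat ./...
```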

## Library detection

Ollama looks for acceleration libraries in the following paths relative to the ollama executable:

- `./lib/ollama` (Windows)
- `../lib/ollama` (Linux)
- `.` (macOS)
- `build/lib/ollama` (for development)

If no libraries are found at these paths, Ollama runs without any acceleration libraries.
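As an illustration of the lookup list above, here is how those candidate directories resolve for a hypothetical executable at `/opt/ollama/bin/ollama` (plain path arithmetic, not Ollama's actual lookup code):

```shell
exe=/opt/ollama/bin/ollama             # hypothetical install location
bindir=$(dirname "$exe")

echo "$bindir/lib/ollama"              # ./lib/ollama  relative to the binary (Windows layout)
echo "$(dirname "$bindir")/lib/ollama" # ../lib/ollama relative to the binary (Linux layout)
echo "$bindir"                         # .             the binary's own directory (macOS layout)
```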