Issues: ggerganov/llama.cpp
- Feature Request: when will llama.cpp support converting the Qwen2.5 VL 7B/72B models to GGUF? [enhancement] #11541, opened Jan 31, 2025 by sooit
- Misc. bug: llama-server ignores the stop parameter [bug-unconfirmed] #11538, opened Jan 31, 2025 by matteoserva
- Feature Request: prefix the assistant answer [enhancement] #11536, opened Jan 31, 2025 by 99991
- Feature Request: MoE loads only the activated expert(s) to the GPU, leaving unused experts unloaded, for DeepSeek-R1 inference on consumer GPUs [enhancement] #11532, opened Jan 30, 2025 by marvin-0042
- Feature Request: add --no-warmup to llama-qwen2vl-cli [enhancement] #11526, opened Jan 30, 2025 by fidecastro
- Virus detected in the release b4595 (llama-b4595-bin-win-cuda-cu12.4-x64.zip) #11523, opened Jan 30, 2025 by manish-ahuja
- Move gguf fuzzers to the llama.cpp repository [enhancement, testing] #11514, opened Jan 30, 2025 by slaren
- Misc. bug: AMD ROCm command error only with CLI tools [bug-unconfirmed] #11509, opened Jan 30, 2025 by pcfreak30
- Eval bug: crash while loading DeepSeek R1 in a CUDA multi-node RPC setup [bug-unconfirmed] #11508, opened Jan 30, 2025 by ortegaalfredo
- Feature Request: is a mixed ROCm+CUDA setup possible? [enhancement] #11506, opened Jan 30, 2025 by yshui
- Compile bug: trouble compiling the latest llama.cpp code from GitHub with the rocm-6.3.0 release [bug-unconfirmed] #11498, opened Jan 29, 2025 by rlee3030
- Feature Request: support for DeepSeek Janus-Pro-7B & Janus-1.3B [enhancement] #11490, opened Jan 29, 2025 by YorkieDev
- Feature Request: Qwen 2.5 VL [enhancement] #11483, opened Jan 29, 2025 by bold84
- The last working build of llama.cpp for Windows 10 21H2 is 4179 [bug-unconfirmed] #11479, opened Jan 29, 2025 by multimediaconverter
- Compile bug: why does my build keep failing at the first step? [bug-unconfirmed] #11478, opened Jan 29, 2025 by arunman1kandan
- Eval bug: model fails with an assertion error [bug-unconfirmed] #11476, opened Jan 29, 2025 by ad-astra-video
- Research: benchmarking DeepSeek-R1 IQ1_S 1.58-bit [research 🔬] #11474, opened Jan 28, 2025 by loretoparisi
- Feature Request: YuE (music gen) [enhancement] #11467, opened Jan 28, 2025 by henk717