The answer is yes... however, that P104-100 card only has 4GB of usable VRAM and severely gimped fp16 performance (1:64). If you want to use it, you will need a llama.cpp-based backend (such as koboldcpp or ollama).
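Because exllamav2 depends on fast fp16, the P104-100 is only practical through a llama.cpp-based backend, which runs quantized GGUF models and can split layers across mismatched cards. Here is a minimal sketch using llama-cpp-python (one binding for llama.cpp); the model path and the split ratio are illustrative assumptions, not measured values for this hardware.

```python
# Minimal sketch, assuming llama-cpp-python is installed with CUDA support
# (pip install llama-cpp-python) and a hypothetical quantized model file
# at ./model.gguf -- adjust both to your setup.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # hypothetical path to a GGUF-quantized model
    n_gpu_layers=-1,            # offload every layer to the GPUs
    # Weight the split toward the 16GB 4060 Ti; the exact ratio here is an
    # illustrative assumption, not a tuned optimum.
    tensor_split=[0.8, 0.2],
    n_ctx=4096,                 # context window size
)

result = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(result["choices"][0]["text"])
```

The equivalent flags on the llama.cpp command line are `-ngl` (GPU layers) and `--tensor-split`; koboldcpp exposes the same controls through its own launcher options.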
Problem
My platform is an X99 motherboard with a Xeon E5-2666 v3 CPU, an RTX 4060 Ti 16GB graphics card, and a P104-100 8GB graphics card. Can I run exllamav2 across both cards together?
Solution
Alternatives
No response
Explanation
Examples
No response
Additional context
No response