Mirror of https://github.com/ollama/ollama.git (synced 2025-05-11 18:36:41 +02:00)
Revamp the dynamic library shim
This switches the default llama.cpp build to be CPU-based, and builds the GPU variants as dynamically loaded libraries that we can select at runtime. It also bumps the ROCm library to version 6, since binaries built against 5.7 do not work with the latest ROCm release that just shipped.
parent 1d1eb1688c
commit 7555ea44f8
14 changed files with 272 additions and 280 deletions
```diff
@@ -21,6 +21,7 @@ func GetGPUInfo() GpuInfo {
 	return GpuInfo{
 		Driver:      "METAL",
 		Library:     "default",
 		TotalMemory: 0,
 		FreeMemory:  0,
 	}
```
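The runtime selection the commit message describes can be sketched as follows. This is a hypothetical illustration, not the actual ollama implementation: the `pickLibrary` function and the variant names are assumptions; only the `GpuInfo` fields come from the diff. The idea is that the CPU build is the default, and a GPU shim is chosen only when a matching driver is detected.

```go
package main

import "fmt"

// GpuInfo mirrors the struct shown in the diff: a detected driver,
// the library variant to load, and memory figures.
type GpuInfo struct {
	Driver      string
	Library     string
	TotalMemory uint64
	FreeMemory  uint64
}

// pickLibrary is a hypothetical sketch of runtime variant selection:
// fall back to the CPU build unless a known GPU driver was detected.
// The variant names here are illustrative, not ollama's real ones.
func pickLibrary(info GpuInfo) string {
	switch info.Driver {
	case "CUDA":
		return "llama_cuda"
	case "ROCM":
		return "llama_rocm"
	case "METAL":
		return "llama_metal"
	default:
		return "llama_cpu" // CPU-based default build
	}
}

func main() {
	// A Metal-capable machine selects the Metal shim.
	fmt.Println(pickLibrary(GpuInfo{Driver: "METAL", Library: "default"}))
	// An unrecognized driver falls back to the CPU build.
	fmt.Println(pickLibrary(GpuInfo{Driver: ""}))
}
```

Loading the chosen variant as a shared library at runtime (rather than linking one variant statically) is what lets a single binary serve CPU-only, CUDA, ROCm, and Metal machines.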