mirror of
https://github.com/ollama/ollama.git
synced 2025-05-10 18:06:33 +02:00
lower default num parallel to 2
This is in part to "pay" for #10452, which doubled the default context length. The combination isn't fully neutral, though: even though the old 4x2k limit and the new 2x4k limit are memory-equivalent, the 1x fallback is larger with 4k.
This commit is contained in:
parent
6ec71d8fb6
commit
fe5b9bb21b
1 changed file with 1 addition and 1 deletion
@@ -58,7 +58,7 @@ var defaultModelsPerGPU = 3

 // Default automatic value for parallel setting
 // Model will still need to fit in VRAM. If this setting won't fit
 // we'll back off down to 1 to try to get it to fit
-var defaultParallel = 4
+var defaultParallel = 2

 var ErrMaxQueue = errors.New("server busy, please try again. maximum pending requests exceeded")
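The memory arithmetic in the commit message can be sketched as follows. This is a minimal illustration, not code from the repository: it assumes that per-request KV-cache memory scales linearly with context length, so the total budget is roughly numParallel times the context length. The `totalCtx` helper is hypothetical.

```go
package main

import "fmt"

// totalCtx is a hypothetical helper: total context tokens that must fit
// in VRAM, assuming memory scales with numParallel * ctxLen.
func totalCtx(numParallel, ctxLen int) int {
	return numParallel * ctxLen
}

func main() {
	oldLimit := totalCtx(4, 2048) // old default: 4 parallel x 2k context
	newLimit := totalCtx(2, 4096) // new default: 2 parallel x 4k context

	// The two defaults are memory-equivalent...
	fmt.Println(oldLimit == newLimit)

	// ...but the 1x fallback (used when the parallel setting won't fit)
	// is larger with the doubled 4k context, so the change isn't neutral.
	fmt.Println(totalCtx(1, 4096) > totalCtx(1, 2048))
}
```

Both comparisons print `true`, matching the commit message's point that the limits match at the default parallelism but diverge once the scheduler backs off to 1.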