ollama/ml/backend
Jesse Gross c2f5d6662b ollamarunner: Re-enable worst case graph preallocation.
Worst-case graph preallocation was disabled by commit a27462b
("ollamarunner: Temporarily disable worst case graph preallocation")
because it caused crashes with large batches when not running on the GPU.

This backports upstream llama.cpp commit f057808
"ggml: Don't assert fail when tensor data changes (#13222)", which
fixes the underlying bug and allows reverting the previous workaround.
2025-05-02 12:22:47 -07:00
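
For context on the technique being re-enabled: worst-case graph preallocation means building and reserving memory for the largest compute graph the runner could ever need (sized for the maximum batch) before serving any requests, so real batches reuse that reservation rather than reallocating tensors mid-run. The sketch below is a minimal, hypothetical Go illustration of that idea only; every name in it (buildGraph, runner, reserveWorstCase) is invented for illustration and is not Ollama's or ggml's actual API.

```go
package main

import "fmt"

// buildGraph is a hypothetical stand-in for constructing a compute graph for
// a batch of the given size; it returns the bytes that graph would require.
func buildGraph(batchSize int) int {
	// Placeholder cost model: memory grows with batch size.
	return batchSize * 1024
}

type runner struct {
	maxBatch int
	reserved int // bytes preallocated for the worst-case graph
}

// reserveWorstCase allocates once, at startup, for the largest batch the
// runner will accept, so no later batch needs to grow buffers mid-inference.
func (r *runner) reserveWorstCase() {
	r.reserved = buildGraph(r.maxBatch)
}

// run checks that a real batch fits inside the worst-case reservation; by
// construction every batchSize <= maxBatch does.
func (r *runner) run(batchSize int) error {
	if need := buildGraph(batchSize); need > r.reserved {
		return fmt.Errorf("batch needs %d bytes but only %d reserved", need, r.reserved)
	}
	return nil
}

func main() {
	r := &runner{maxBatch: 512}
	r.reserveWorstCase()
	fmt.Println(r.run(32))  // <nil>: fits within the reservation
	fmt.Println(r.run(512)) // <nil>: the worst case fits by construction
}
```

The design point is that the reservation happens once up front; per the title of the backported fix, the crashes this commit message references came from ggml asserting when tensor data changed after that point.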
ggml        ollamarunner: Re-enable worst case graph preallocation.  2025-05-02 12:22:47 -07:00
backend.go  next ollama runner (#7913)                               2025-02-13 16:31:21 -08:00