ollama/runner/ollamarunner
Jesse Gross c2f5d6662b ollamarunner: Re-enable worst case graph preallocation.
Worst case graph preallocation was disabled by commit a27462b
"ollamarunner: Temporarily disable worst case graph preallocation"
because it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
"ggml: Don't assert fail when tensor data changes (#13222)", which
fixes the underlying bug and allows reverting the previous workaround.
2025-05-02 12:22:47 -07:00
cache.go kvcache: Add check for values that fall out of sliding window cache 2025-04-02 11:55:48 -07:00
cache_test.go ollamarunner: Preallocate worst case graph at startup 2025-04-08 10:01:28 -07:00
runner.go ollamarunner: Re-enable worst case graph preallocation. 2025-05-02 12:22:47 -07:00