ollama/llama/patches
Jesse Gross c2f5d6662b ollamarunner: Re-enable worst case graph preallocation.
Worst case graph preallocation was disabled by commit a27462b
("ollamarunner: Temporarily disable worst case graph preallocation")
because it caused crashes with large batches when not using the GPU.

This backports upstream llama.cpp commit f057808
("ggml: Don't assert fail when tensor data changes (#13222)"), which
fixes the underlying bug and allows the previous workaround to be reverted.
2025-05-02 12:22:47 -07:00
0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0002-pretokenizer.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0003-embeddings.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0004-clip-unicode.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0005-solar-pro.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0006-add-mllama-support.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0007-add-unpad-operator.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0008-fix-deepseek-deseret-regex.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0009-maintain-ordering-for-rules-for-grammar.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0010-ensure-KV-cache-is-fully-defragmented.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0011-sort-devices-by-score.patch llama: update to commit 2016f07b (#10352) 2025-04-24 17:26:02 -07:00
0012-add-phony-target-ggml-cpu-for-all-cpu-variants.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0013-remove-amx.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0014-fix-string-arr-kv-loading.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0015-ollama-debug-tensor.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0016-add-model-quantizations.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0017-add-ollama-vocab-for-grammar-support.patch llama: update to commit e1e8e099 (#10513) 2025-05-01 18:24:09 -07:00
0018-ggml-Don-t-assert-fail-when-tensor-data-changes-1322.patch ollamarunner: Re-enable worst case graph preallocation. 2025-05-02 12:22:47 -07:00
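The numbered filenames above follow git format-patch conventions, so a series like this is normally applied to a llama.cpp checkout in order. Below is a minimal, hypothetical sketch, not ollama's actual build tooling, that applies every *.patch file in this directory with `git am`; the `llama/patches` and `vendor/llama.cpp` paths are assumptions for illustration only.

```go
// applypatches.go: hypothetical helper that applies a numbered git
// format-patch series in order with "git am". Paths are assumptions.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"sort"
)

func main() {
	patchDir := "llama/patches"   // this directory
	repoDir := "vendor/llama.cpp" // assumed location of the llama.cpp checkout

	patches, err := filepath.Glob(filepath.Join(patchDir, "*.patch"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sort.Strings(patches) // the 00NN- prefix keeps them in application order

	for _, p := range patches {
		abs, _ := filepath.Abs(p)
		// git -C <repo> am --3way <patch> applies one patch as a commit.
		cmd := exec.Command("git", "-C", repoDir, "am", "--3way", abs)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "failed to apply %s: %v\n", p, err)
			os.Exit(1)
		}
	}
}
```

Because `git am` records each patch as a commit, a patch that fails to apply can be inspected with `git am --show-current-patch` and the run aborted with `git am --abort`.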