ollama/llm (last commit: 2024-12-16 18:45:46 -08:00)
| File | Last commit | Date |
|------|-------------|------|
| filetype.go | llm: bring fileTypes into alignment with llama.cpp (#7819) | 2024-11-24 10:33:33 -08:00 |
| ggla.go | image processing for llama3.2 (#6963) | 2024-10-18 16:12:35 -07:00 |
| ggml.go | llm: introduce k/v context quantization (vRAM improvements) (#6279) | 2024-12-03 15:57:19 -08:00 |
| ggml_test.go | llm: speed up gguf decoding by a lot (#5246) | 2024-06-24 21:47:52 -07:00 |
| gguf.go | image processing for llama3.2 (#6963) | 2024-10-18 16:12:35 -07:00 |
| llm_darwin.go | Optimize container images for startup (#6547) | 2024-09-12 12:10:30 -07:00 |
| llm_linux.go | Optimize container images for startup (#6547) | 2024-09-12 12:10:30 -07:00 |
| llm_windows.go | runner: Set windows above normal priority (#6905) | 2024-09-21 16:54:49 -07:00 |
| memory.go | Prevent underflow when FreeMemory < overhead (#8014) | 2024-12-10 09:10:40 -08:00 |
| memory_test.go | all: fix typos in documentation, code, and comments (#7021) | 2024-12-10 12:58:06 -08:00 |
| server.go | llm: loosen format check to default to no format (#8127) | 2024-12-16 18:45:46 -08:00 |
| status.go | Improve crash reporting (#7728) | 2024-11-19 16:26:57 -08:00 |