ollama/runner/ollamarunner
Jesse Gross 499ae7311f ollamarunner: Base cached tokens on current prompt
When we restore a sequence from the cache, we split the prompt into
the tokens already stored in the cache and the new tokens that still
need to be processed. Currently, the references to the used tokens
come from the previously stored sequence.

However, even though the used tokens are semantically equivalent to
the prefix of the prompt, they can contain pointers that are no
longer valid. It is therefore better to take the used tokens from the
prompt itself, whose pointers are currently valid.

This has no impact today because it isn't possible to reuse the
pointers (which are tensors) anyway, but it will become an issue once
we can.
2025-05-15 13:46:20 -07:00
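
The commit message describes the prefix-reuse step in the runner's prompt cache: find how much of the incoming prompt is already covered by a cached sequence, rebase the slot's record of those tokens on the current prompt (so any pointer-valued data, such as embedding tensors, stays valid), and return only the suffix that still needs to be processed. Below is a minimal Go sketch of that idea; the names (input, cacheSlot, countCommonPrefix, loadSlot) are illustrative only and are not the runner's actual types or functions.

    package main

    import "fmt"

    // input is a hypothetical stand-in for one prompt element: a token ID plus
    // optional pointer-valued data (for example an image embedding) that is only
    // valid for the request that produced it.
    type input struct {
        token     int32
        embedding []float32 // may be stale when it comes from an older sequence
    }

    // cacheSlot is a hypothetical cached sequence: the inputs already processed.
    type cacheSlot struct {
        inputs []input
    }

    // countCommonPrefix reports how many leading elements of prompt match the
    // cached inputs, comparing by token ID only (the semantic equivalence the
    // commit message relies on).
    func countCommonPrefix(cached, prompt []input) int {
        n := 0
        for n < len(cached) && n < len(prompt) && cached[n].token == prompt[n].token {
            n++
        }
        return n
    }

    // loadSlot splits prompt into reused and new inputs. The point of the commit:
    // the reused portion is rebased on the *current* prompt, whose pointers are
    // valid, instead of keeping the inputs stored from the previous sequence.
    func loadSlot(slot *cacheSlot, prompt []input) (remaining []input) {
        n := countCommonPrefix(slot.inputs, prompt)
        slot.inputs = prompt[:n] // cached tokens now reference the current prompt
        return prompt[n:]        // only these still need to be processed
    }

    func main() {
        slot := &cacheSlot{inputs: []input{{token: 1}, {token: 2}, {token: 3}}}
        prompt := []input{{token: 1}, {token: 2}, {token: 3}, {token: 4}, {token: 5}}

        remaining := loadSlot(slot, prompt)
        fmt.Println("reused:", len(slot.inputs), "new:", len(remaining)) // reused: 3 new: 2
    }

The comparison deliberately looks at token IDs only: the commit message assumes the cached tokens are semantically equivalent to the prompt prefix even when their pointer payloads have gone stale, so only the references need to be refreshed.
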
cache.go        ollamarunner: Base cached tokens on current prompt    2025-05-15 13:46:20 -07:00
cache_test.go   Runner for Ollama engine                               2025-02-13 17:09:26 -08:00
image.go        chore: update mllama to use ollama engine (#10637)     2025-05-13 17:36:02 -07:00
image_test.go   Runner for Ollama engine                               2025-02-13 17:09:26 -08:00
runner.go       chore: update mllama to use ollama engine (#10637)     2025-05-13 17:36:02 -07:00