ollama/model
Jesse Gross a7e63b82be ollamarunner: Improve multimodal input handling
Various vision models have different requirements for how they
receive their inputs. For example:
 - Mllama wants images together with text; the image embeddings don't
   themselves have positions and aren't stored in the main KV cache.
 - Llava-style models feed in embeddings much like tokens, and each
   image corresponds to a varying number of tokens in the cache.
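
Both styles can be served by a single stream-element type that carries either a text token or an opaque, model-defined multimodal payload. The following is a minimal sketch of such a type; the names (Input, Multimodal, SameBatch) are illustrative assumptions, not necessarily the definitions used in this package:

```go
// Hypothetical sketch of one element in the input stream, assuming text and
// multimodal data are represented uniformly. Field names are illustrative,
// not the verified definitions.
package model

// Input is a single element of the stream handed to the runner.
type Input struct {
	// Token is a text token ID, used when this element is ordinary text.
	Token int32

	// Multimodal holds model-defined data (for example an image embedding)
	// for elements that are not plain tokens; the runner treats it as opaque.
	Multimodal any

	// MultimodalHash identifies the payload for caching purposes, since the
	// runner cannot compare opaque payloads itself.
	MultimodalHash uint64

	// SameBatch asks the runner to keep the following N elements in the same
	// batch as this one, for models (such as Mllama) that need an image and
	// the text referring to it processed together.
	SameBatch int
}
```

With an element like this, an Mllama-style model could emit one image element with SameBatch set so the related text stays alongside it, while a Llava-style model could emit several such elements per image.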

In addition, the strategy for providing inputs must support batching
and multiple sequences, which are managed by the runner. At the same
time, we want to keep data handling fully in the model so that new
architectures are not bottlenecked by runner code which does not
understand their particular requirements.

This provides a method for models to edit the input stream so that
it meets their needs while still being in a format that the runner
understands. This allows the runner to avoid special processing
for different models.
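
As a rough illustration, this can be pictured as a post-tokenization pass that the model implements and the runner calls. Continuing the hypothetical sketch above (same package and Input type), the interface and method names here (MultimodalProcessor, PostTokenize) and the Llava-like example are assumptions, not the verified API in model.go:

```go
// Hypothetical continuation of the sketch above (same package, same Input
// type). MultimodalProcessor and PostTokenize are assumed names.

// MultimodalProcessor is the hook a multimodal model implements to rewrite
// the tokenized stream while keeping it in the generic form the runner reads.
type MultimodalProcessor interface {
	PostTokenize(inputs []Input) ([]Input, error)
}

// llavaLikeModel stands in for a Llava-style architecture where each image
// occupies several token positions in the KV cache.
type llavaLikeModel struct{}

// imagePatch stands in for one embedding slot produced from an image.
type imagePatch struct{ embedding []float32 }

// PostTokenize expands every image payload into one stream entry per
// embedding slot, so the runner can batch and cache those entries like
// tokens without knowing anything about the model's layout.
func (m *llavaLikeModel) PostTokenize(inputs []Input) ([]Input, error) {
	out := make([]Input, 0, len(inputs))
	for _, in := range inputs {
		if in.Multimodal == nil {
			out = append(out, in) // ordinary text token, pass through unchanged
			continue
		}
		patches, ok := in.Multimodal.([]imagePatch)
		if !ok {
			out = append(out, in) // unknown payload, leave untouched
			continue
		}
		for _, p := range patches {
			out = append(out, Input{Multimodal: p, MultimodalHash: in.MultimodalHash})
		}
	}
	return out, nil
}
```

Because the edited stream is still just a slice of generic elements, the runner's batching, sequencing, and caching logic never needs model-specific branches.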

In addition, this fixes a regression where non-vision models could
incorrectly try to interpret images.
2025-03-06 16:54:16 -08:00
imageproc imageproc mllama refactor (#7537) 2024-12-14 19:50:15 -08:00
models ollamarunner: Improve multimodal input handling 2025-03-06 16:54:16 -08:00
testdata next ollama runner (#7913) 2025-02-13 16:31:21 -08:00
model.go ollamarunner: Improve multimodal input handling 2025-03-06 16:54:16 -08:00
model_test.go New engine: vision models and auto-fallback (#9113) 2025-03-04 09:03:46 -08:00
process_text.go model: Don't unconditionally add special tokens 2025-03-06 16:54:16 -08:00
process_text_test.go model: Don't unconditionally add special tokens 2025-03-06 16:54:16 -08:00