Mirror of https://github.com/ollama/ollama.git
Currently there is a single context per sequence, shared by all multimodal inputs. Since we build a vision encoder graph per image, a large number of inputs can eventually hit the maximum number of graph nodes per context. This change uses a separate context for each image instead, ensuring that the available resource limits are consistent.
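The gist of the change, as a minimal sketch in Go: allocate one short-lived context per image rather than accumulating every per-image graph in a single shared context. The `Backend`/`Context` interfaces and all names below are simplified, hypothetical stand-ins, not ollama's actual ml package API.

```go
// Sketch of per-image contexts for vision encoding. All types here
// are hypothetical stand-ins for illustration only.
package main

import "fmt"

// Context owns a bounded graph-node budget; Close releases it.
type Context interface {
	Close()
}

// Backend can allocate fresh contexts, each with its own node limit.
type Backend interface {
	NewContext() Context
}

type visionEncoder struct {
	backend Backend
}

// encodeAll builds one vision-encoder graph per image. With a single
// shared context, enough images could exceed the per-context node
// maximum; giving each image its own short-lived context resets the
// node budget per image, keeping resource limits consistent.
func (e *visionEncoder) encodeAll(images [][]byte) {
	for i, img := range images {
		ctx := e.backend.NewContext() // fresh graph-node budget
		e.encode(ctx, img)
		ctx.Close() // free graph resources before the next image
		fmt.Printf("encoded image %d in its own context\n", i)
	}
}

func (e *visionEncoder) encode(ctx Context, img []byte) {
	// Build and run the vision-encoder graph inside ctx (elided).
	_, _ = ctx, img
}

// Toy implementations so the sketch runs as-is.
type fakeContext struct{}

func (fakeContext) Close() {}

type fakeBackend struct{}

func (fakeBackend) NewContext() Context { return fakeContext{} }

func main() {
	enc := &visionEncoder{backend: fakeBackend{}}
	enc.encodeAll([][]byte{{0x01}, {0x02}, {0x03}})
}
```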
Directory contents:

imageproc/
input/
models/
testdata/
model.go
model_test.go
process_text.go
process_text_spm.go
process_text_spm_test.go
process_text_test.go