update default model to llama3.2 (#6959)

commit 55ea963c9e (parent e9e9bdb8d9)
Author: Jeffrey Morgan (committed by GitHub)
Date: 2024-09-25 11:11:22 -07:00
29 changed files with 102 additions and 100 deletions


@@ -69,7 +69,7 @@ Enable JSON mode by setting the `format` parameter to `json`. This will structur
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1",
"model": "llama3.2",
"prompt": "Why is the sky blue?"
}'
```
@@ -80,7 +80,7 @@ A stream of JSON objects is returned:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"response": "The",
"done": false
@@ -102,7 +102,7 @@ To calculate how fast the response is generated in tokens per second (token/s),
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "",
"done": true,
@@ -124,7 +124,7 @@ A response can be received in one reply when streaming is off.
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1",
"model": "llama3.2",
"prompt": "Why is the sky blue?",
"stream": false
}'
@@ -136,7 +136,7 @@ If `stream` is set to `false`, the response will be a single JSON object:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "The sky is blue because it is the color of the sky.",
"done": true,
@@ -194,7 +194,7 @@ curl http://localhost:11434/api/generate -d '{
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1",
"model": "llama3.2",
"prompt": "What color is the sky at different times of the day? Respond using JSON",
"format": "json",
"stream": false
@@ -205,7 +205,7 @@ curl http://localhost:11434/api/generate -d '{
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-11-09T21:07:55.186497Z",
"response": "{\n\"morning\": {\n\"color\": \"blue\"\n},\n\"noon\": {\n\"color\": \"blue-gray\"\n},\n\"afternoon\": {\n\"color\": \"warm gray\"\n},\n\"evening\": {\n\"color\": \"orange\"\n}\n}\n",
"done": true,
@@ -327,7 +327,7 @@ If you want to set custom options for the model at runtime rather than in the Mo
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1",
"model": "llama3.2",
"prompt": "Why is the sky blue?",
"stream": false,
"options": {
@@ -368,7 +368,7 @@ curl http://localhost:11434/api/generate -d '{
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T19:22:45.499127Z",
"response": "The sky is blue because it is the color of the sky.",
"done": true,
@@ -390,7 +390,7 @@ If an empty prompt is provided, the model will be loaded into memory.
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1"
"model": "llama3.2"
}'
```
@@ -400,7 +400,7 @@ A single JSON object is returned:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-12-18T19:52:07.071755Z",
"response": "",
"done": true
@@ -415,7 +415,7 @@ If an empty prompt is provided and the `keep_alive` parameter is set to `0`, a m
```shell
curl http://localhost:11434/api/generate -d '{
"model": "llama3.1",
"model": "llama3.2",
"keep_alive": 0
}'
```
@@ -426,7 +426,7 @@ A single JSON object is returned:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2024-09-12T03:54:03.516566Z",
"response": "",
"done": true,
@@ -472,7 +472,7 @@ Send a chat message with a streaming response.
```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"model": "llama3.2",
"messages": [
{
"role": "user",
@@ -488,7 +488,7 @@ A stream of JSON objects is returned:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"message": {
"role": "assistant",
@@ -503,7 +503,7 @@ Final response:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T19:22:45.499127Z",
"done": true,
"total_duration": 4883583458,
@@ -521,7 +521,7 @@ Final response:
```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"model": "llama3.2",
"messages": [
{
"role": "user",
@@ -536,7 +536,7 @@ curl http://localhost:11434/api/chat -d '{
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-12-12T14:13:43.416799Z",
"message": {
"role": "assistant",
@@ -560,7 +560,7 @@ Send a chat message with a conversation history. You can use this same approach
```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"model": "llama3.2",
"messages": [
{
"role": "user",
@@ -584,7 +584,7 @@ A stream of JSON objects is returned:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T08:52:19.385406455-07:00",
"message": {
"role": "assistant",
@@ -598,7 +598,7 @@ Final response:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-08-04T19:22:45.499127Z",
"done": true,
"total_duration": 8113331500,
@@ -656,7 +656,7 @@ curl http://localhost:11434/api/chat -d '{
```shell
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"model": "llama3.2",
"messages": [
{
"role": "user",
@@ -674,7 +674,7 @@ curl http://localhost:11434/api/chat -d '{
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2023-12-12T14:13:43.416799Z",
"message": {
"role": "assistant",
@@ -696,7 +696,7 @@ curl http://localhost:11434/api/chat -d '{
```
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"model": "llama3.2",
"messages": [
{
"role": "user",
@@ -735,7 +735,7 @@ curl http://localhost:11434/api/chat -d '{
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at": "2024-07-22T20:33:28.123648Z",
"message": {
"role": "assistant",
@@ -771,7 +771,7 @@ If the messages array is empty, the model will be loaded into memory.
```
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"model": "llama3.2",
"messages": []
}'
```
@@ -779,7 +779,7 @@ curl http://localhost:11434/api/chat -d '{
##### Response
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at":"2024-09-12T21:17:29.110811Z",
"message": {
"role": "assistant",
@@ -798,7 +798,7 @@ If the messages array is empty and the `keep_alive` parameter is set to `0`, a m
```
curl http://localhost:11434/api/chat -d '{
"model": "llama3.1",
"model": "llama3.2",
"messages": [],
"keep_alive": 0
}'
@@ -810,7 +810,7 @@ A single JSON object is returned:
```json
{
"model": "llama3.1",
"model": "llama3.2",
"created_at":"2024-09-12T21:33:17.547535Z",
"message": {
"role": "assistant",
@@ -989,7 +989,7 @@ Show information about a model including details, modelfile, template, parameter
```shell
curl http://localhost:11434/api/show -d '{
"name": "llama3.1"
"name": "llama3.2"
}'
```
@@ -1050,7 +1050,7 @@ Copy a model. Creates a model with another name from an existing model.
```shell
curl http://localhost:11434/api/copy -d '{
"source": "llama3.1",
"source": "llama3.2",
"destination": "llama3-backup"
}'
```
@@ -1105,7 +1105,7 @@ Download a model from the ollama library. Cancelled pulls are resumed from where
```shell
curl http://localhost:11434/api/pull -d '{
"name": "llama3.1"
"name": "llama3.2"
}'
```
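
Every hunk above makes the same one-line substitution. As a quick sanity check after applying this change, the docs' first example can be run against the new default — a minimal sketch, assuming a local Ollama install serving on the default port and that `llama3.2` has not yet been pulled:

```shell
# Fetch the new default model, then exercise the docs' first example against it.
# Assumes ollama is installed and the server is listening on localhost:11434.
ollama pull llama3.2
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```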