Update README and versions for 22.10 release (#554)
mc-nv authored Nov 4, 2022
1 parent 07bb431 commit 0275440
Showing 7 changed files with 9 additions and 9 deletions.
README.md: 2 additions, 2 deletions

@@ -20,9 +20,9 @@ limitations under the License.

**LATEST RELEASE: You are currently on the main branch which tracks
under-development progress towards the next release. The latest
-release of the Triton Model Analyzer is 1.20.0 and is available on
+release of the Triton Model Analyzer is 1.21.0 and is available on
branch
-[r22.09](https://github.com/triton-inference-server/model_analyzer/tree/r22.09).**
+[r22.10](https://github.com/triton-inference-server/model_analyzer/tree/r22.10).**

Triton Model Analyzer is a CLI tool to help with better understanding of the
compute and memory requirements of the
VERSION: 1 addition, 1 deletion
@@ -1 +1 @@
-1.21.0dev
+1.21.0
docs/config.md: 1 addition, 1 deletion

@@ -130,7 +130,7 @@ profile_models: <comma-delimited-string-list>
[ reload_model_disable: <bool> | default: false]
# Triton Docker image tag used when launching using Docker mode
-[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:22.09-py3 ]
+[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:22.10-py3 ]
# Triton Server HTTP endpoint url used by Model Analyzer client.
[ triton_http_endpoint: <string> | default: localhost:8000 ]
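The bracketed option notation shown in `docs/config.md` is documentation shorthand, but it is regular enough to extract values from. As a sketch, a small helper (hypothetical, not part of Model Analyzer) that pulls the option name and default out of such a line:

```python
import re

# Hypothetical helper (not part of Model Analyzer) that extracts an option
# name and its default value from a line in the docs/config.md notation,
# e.g. "[ triton_docker_image: <string> | default: ... ]".
OPTION_RE = re.compile(r"\[\s*(\w+):\s*<[^>]+>\s*\|\s*default:\s*(.+?)\s*\]")

def parse_default(line):
    """Return (option_name, default_value), or None if the line does not match."""
    match = OPTION_RE.match(line.strip())
    if match is None:
        return None
    return match.group(1), match.group(2)

# The default updated by this commit parses cleanly:
print(parse_default(
    "[ triton_docker_image: <string> | default: nvcr.io/nvidia/tritonserver:22.10-py3 ]"
))  # -> ('triton_docker_image', 'nvcr.io/nvidia/tritonserver:22.10-py3')
```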
docs/kubernetes_deploy.md: 1 addition, 1 deletion

@@ -79,7 +79,7 @@ images:
triton:
image: nvcr.io/nvidia/tritonserver
-tag: 22.09-py3
+tag: 22.10-py3
```

The model analyzer executable uses the config file defined in `helm-chart/templates/config-map.yaml`. This config can be modified to supply arguments to model analyzer. Only the content under the `config.yaml` section of the file should be modified.
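Mechanically, this release commit is one substitution repeated across every file that pins the Triton image tag. A sketch of that bump as a script (hypothetical, not part of the repository):

```python
# Hypothetical helper mirroring what this commit does by hand across the
# helm chart, docs, and Python defaults: swap the pinned Triton image tag.
def bump_release(text, old='22.09-py3', new='22.10-py3'):
    # Plain substring replacement suffices: the tag string is unambiguous.
    return text.replace(old, new)

values = 'image: nvcr.io/nvidia/tritonserver\ntag: 22.09-py3\n'
print(bump_release(values))  # tag line now reads "tag: 22.10-py3"
```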
docs/quick_start.md: 2 additions, 2 deletions

@@ -49,7 +49,7 @@ git pull origin main
**1. Pull the SDK container:**

```
-docker pull nvcr.io/nvidia/tritonserver:22.09-py3-sdk
+docker pull nvcr.io/nvidia/tritonserver:22.10-py3-sdk
```

**2. Run the SDK container**
@@ -59,7 +59,7 @@ docker run -it --gpus all \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start \
-v <path-to-output-model-repo>:<path-to-output-model-repo> \
---net=host nvcr.io/nvidia/tritonserver:22.09-py3-sdk
+--net=host nvcr.io/nvidia/tritonserver:22.10-py3-sdk
```

**Replacing** `<path-to-output-model-repo>` with the
helm-chart/values.yaml: 1 addition, 1 deletion

@@ -41,4 +41,4 @@ images:

triton:
image: nvcr.io/nvidia/tritonserver
-tag: 22.09-py3
+tag: 22.10-py3
model_analyzer/config/input/config_defaults.py: 1 addition, 1 deletion

@@ -49,7 +49,7 @@
DEFAULT_RUN_CONFIG_SEARCH_MODE = 'brute'
DEFAULT_RUN_CONFIG_PROFILE_MODELS_CONCURRENTLY_ENABLE = False
DEFAULT_TRITON_LAUNCH_MODE = 'local'
-DEFAULT_TRITON_DOCKER_IMAGE = 'nvcr.io/nvidia/tritonserver:22.09-py3'
+DEFAULT_TRITON_DOCKER_IMAGE = 'nvcr.io/nvidia/tritonserver:22.10-py3'
DEFAULT_TRITON_HTTP_ENDPOINT = 'localhost:8000'
DEFAULT_TRITON_GRPC_ENDPOINT = 'localhost:8001'
DEFAULT_TRITON_METRICS_URL = 'http://localhost:8002/metrics'
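Constants like these act as fallbacks: options a user leaves out of the config resolve to the module-level defaults. A minimal sketch of that precedence, assuming a dict-merge style resolution (the `resolve_config` helper is hypothetical; the constant names mirror the diff):

```python
# Sketch (assumed, not the actual Model Analyzer code) of how module-level
# defaults like those in config_defaults.py are typically consumed:
# user-supplied options win, anything unspecified falls back to the default.
DEFAULT_TRITON_DOCKER_IMAGE = 'nvcr.io/nvidia/tritonserver:22.10-py3'
DEFAULT_TRITON_HTTP_ENDPOINT = 'localhost:8000'
DEFAULT_TRITON_GRPC_ENDPOINT = 'localhost:8001'

def resolve_config(user_config):
    """Merge user options over the defaults (hypothetical helper)."""
    defaults = {
        'triton_docker_image': DEFAULT_TRITON_DOCKER_IMAGE,
        'triton_http_endpoint': DEFAULT_TRITON_HTTP_ENDPOINT,
        'triton_grpc_endpoint': DEFAULT_TRITON_GRPC_ENDPOINT,
    }
    return {**defaults, **user_config}

resolved = resolve_config({'triton_http_endpoint': 'localhost:9000'})
print(resolved['triton_docker_image'])   # default retained
print(resolved['triton_http_endpoint'])  # user override applied
```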
